
Austerity and Debt Conspire to Wreck the Lives of Working American Families

Working families are sandbagged by mortgage debt, and retirees can’t get decent returns on their investments.
 
 
The following is an excerpt from Debtors' Prison by Robert Kuttner (Knopf 2013).

In this, the fifth year of a prolonged downturn triggered by a financial crash, the prevailing view is that we all must pay for yesterday’s excess. This case is made in both economic and moral terms. Nations and households ran up unsustainable debts; these obligations must be honored—to satisfy creditors, restore market confidence, deter future recklessness, and compel people and nations to live within their means.

A phrase often heard is “moral hazard,” a concept borrowed by economists from the insurance industry. In its original usage, the term referred to the risk that insuring against an adverse event would invite the event. For example, someone who insured a house for more than its worth would have an incentive to burn it down. Nowadays, economists use the term to mean any unintended reward for bad behavior. Presumably, if we give debt relief to struggling homeowners or beleaguered nations, we invite more profligacy in the future. Hence, belts need to be tightened not just to improve fiscal balance but as punishment for past misdeeds and inducement for better self-discipline in the future.

There are several problems with the application of the moral hazard doctrine to the present crisis. It’s certainly true that under normal circumstances debts need to be honored, with bankruptcy reserved for special cases. Public policy should neither encourage governments, households, enterprises, or banks to borrow beyond prudent limits nor make it too easy for them to walk away from debts. But after a collapse, a debt overhang becomes a macroeconomic problem, not a personal or moral one. In a deflated economy, debt burdens undermine both debtors’ capacity to pay and their ability to pursue productive economic activity. Intensified belt-tightening deepens depression by further undercutting purchasing power generally. Despite facile analogies between governments and households, government is different from other actors. In a depression, even with high levels of public debt, additional government borrowing and spending may be the only way to jump-start the economy’s productive capacity at a time when the private sector is too traumatized to invest and spend.

The idea that anxiety about future deficits harms investor or consumer confidence is contradicted by both economic theory and evidence. At this writing, the U.S. government is able to borrow from private money markets for ten years at interest rates well under 2 percent and for thirty years at less than 3 percent. If markets were concerned that higher deficits five or even twenty-five years from now would cause rising inflation or a weaker dollar, they would not dream of lending the government money for thirty years at 3 percent interest. Consumers are reluctant to spend and businesses hesitant to invest because of reduced purchasing power in a weak economy. Abstract worries about the federal deficit are simply not part of this calculus.

“Living within one’s means” is an appealing but oversimplified metaphor. Before the crisis, some families and nations did borrow to finance consumption—a good definition of living beyond one’s means. But this borrowing was not the prime cause of the crisis. Today, far larger numbers of entirely prudent people find themselves with diminished means as a result of broader circumstances beyond their control, and bad policies compound the problem.

After a general collapse, one’s means are influenced by whether the economy is growing or shrinking. If I am out of work, with depleted income, almost any normal expenditure is beyond my means. If my lack of a job throws you out of work, soon you are living beyond your means, too, and the whole economy cascades downward. In an already depressed economy, demanding that we all live within our (depleted) means can further reduce everyone’s means. If you put an entire nation under a rigid austerity regime, its capacity for economic growth is crippled. Even creditors will eventually suffer from the distress and social chaos that follow.

Distinguish moral hazard ex ante from moral hazard ex post, and you will find that blame is widely attributed to the wrong immoralists. Governments and families are being asked to accept austerity for the common good. Yet the prime movers of the crisis were bankers who incurred massive debts in order to pursue speculative activities. The weak reforms to date have not changed the incentives for excessively risky banker behaviors, which persist.

The best cure for moral hazard is the proverbial ounce of prevention. Moral hazard was rampant in the run-up to the crash because the financial industry was allowed to make wildly speculative bets and to pass along risks to the rest of the society. Yet in its aftermath, this financial crisis is being treated more as an object lesson in personal improvidence than as a case for drastic financial reform.

Austerity and Its Alternatives

The last great financial collapse, by contrast, transformed America’s economics. First, however, the Roosevelt administration needed to transform politics. FDR’s reforms during the Great Depression constrained both the financial abuses that caused the crash of 1929 and the political power of Wall Street. Deficit-financed public spending under the New Deal restored growth rates but did not eliminate joblessness. The much larger spending of World War II—with deficits averaging 26 percent of gross domestic product for each of the four war years—finally brought the economy back to full employment, setting the stage for the postwar recovery.

By the war’s end, the U.S. government’s public debt exceeded 120 percent of GDP, almost twice today’s ratio. America worked off that debt not by tightening its belt but by liberating the economy’s potential. In 1945, there was no panel like President Obama’s Bowles-Simpson commission targeting the debt ratio a decade into the future and recommending ten years of budget cuts. Rather, the greater worry was that absent the stimulus of war and with twelve million newly jobless GIs returning home, the civilian economy would revert to depression. So America doubled down on its public investments with programs like the GI Bill and the Marshall Plan. For three decades, the economy grew faster than the debt, and the debt dwindled to less than 30 percent of GDP. Finance was well regulated so that there was no speculation in the public debt. The Department of the Treasury pegged the rate that the government would pay for its bonds at an affordable 2.5 percent. The Federal Reserve Board provided liquidity as necessary.
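The arithmetic of growing faster than the debt can be sketched with a simple recursion: each year, new borrowing adds to the debt stock while nominal growth expands the denominator. A stylized illustration in Python, where the function name, growth rate, and deficit figure are my own illustrative assumptions rather than the historical series:

```python
def debt_ratio_path(ratio, nominal_growth, annual_deficit, years):
    """Evolve a debt-to-GDP ratio year by year: the annual deficit
    adds to the debt stock, while nominal growth expands GDP."""
    for _ in range(years):
        ratio = (ratio + annual_deficit) / (1 + nominal_growth)
    return ratio

# Start at 120% of GDP with roughly 6% nominal growth and small deficits
print(round(debt_ratio_path(1.20, 0.06, 0.005, 30), 2))  # → 0.28
```

So long as growth outruns new borrowing, the ratio decays geometrically toward a low steady state, with no belt-tightening required. That is the mechanism this paragraph describes.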

The Franklin Roosevelt era ushered in an exceptional period in the dismal history of debt politics. Not only were banks well regulated, but the government used innovative public institutions such as the Reconstruction Finance Corporation to recapitalize banks and industrial enterprises and the Home Owners’ Loan Corporation to refinance home mortgages. Chastened by the catastrophe of the reparations extracted from Germany after World War I, the victorious Allies in 1948 wrote off nearly all of the Nazi debt so that the German economy could recover and then sweetened the pot with Marshall Plan aid. Globally, the Bretton Woods accord created a new international monetary system that limited the power of private financiers, offered new public forms of credit, and biased the financial system toward economic expansion. This story is told in detail in the chapters that follow.

In 1936, John Maynard Keynes provocatively called for “the euthanasia of the rentier.” He meant that once an economy was stabilized into a high-growth regime of managed capitalism, combining low real interest rates with strictures against speculation, and using macroeconomic management of the business cycle to maintain full employment, capital markets would efficiently and even passively channel financial investment into productive enterprise. In such a world, there would still be innovative entrepreneurs, but the parasitic role of a purely financial class reaping immense profits from the manipulation of paper would dwindle to insignificance. Legitimate passive investors—pension funds, life insurance companies, small savers, and the proverbial trust accounts of widows and orphans—would reap decent returns, but there would be neither windfalls for the financial middlemen nor catastrophic risks imposed by them on the rest of the economy. Stripped of the hyperbole, this picture describes the orderly but dynamic economy of the 1940s, 1950s, and 1960s, a time when finance was harnessed to the public interest, true innovators were rewarded, most investors earned merely normal returns, and windfall speculative profits were not available—because the rules of the game gave priority to investment in the real productive economy.

In today’s economy, which is dominated by high finance, small debtors and small creditors are on the same side of a larger class divide. The economic prospects of working families are sandbagged by the mortgage debt overhang. Meanwhile, retirees can’t get decent returns on their investments because central banks have cut interest rates to historic lows to prevent the crisis from deepening. Yet the paydays of hedge fund managers and of executives of large banks that only yesterday were given debt relief by the government are bigger than ever. And corporate executives and their private equity affiliates can shed debts using the bankruptcy code and then sail merrily on.

Exaggerated worries about public debt are a staple of conservative rhetoric in good times and bad. Many misguided critics preached austerity even during the Great Depression. As banks, factories, and farms were failing in a cumulative economic collapse, Andrew Mellon, one of America’s richest men and Treasury secretary from 1921 to 1932, famously advised President Hoover to “liquidate labor, liquidate stocks, liquidate farmers, liquidate real estate . . . it will purge the rottenness out of the system. High costs of living and high living will come down. People will work harder, live a more moral life.” The sentiments, which today sound ludicrous against the history of the Depression, are not so different from those being solemnly expressed by the U.S. austerity lobby or the German Bundesbank.

The Great Conflation

Austerity economics conflates several kinds of debt, each with its own causes, consequences, and remedies. The reality is that public debt, financial industry debt, consumer debt, and debt owed to foreign creditors are entirely different creatures.

The prime nemesis of the conventional account is government debt. Public borrowing is said to crowd out productive private investment, raise interest rates, and risk inflation. At some point, the nation goes broke paying interest on past debt, the world stops trusting the dollar, and we end up like Greece or Weimar Germany. Deficit hawks further conflate current increases in the deficit caused by the recession itself with projected deficits in Social Security and Medicare. Supposedly, cutting Social Security benefits over the next decade or two will restore financial confidence now. Since businesses don’t base investment decisions on such projections, those claims defy credulity.

Until the collapse of 2008, most government debts were manageable. Spain and Ireland, two of the alleged sinner nations, actually had low ratios of debt to gross domestic product. Ireland ran up its public debt bailing out the reckless bets of private banks. Spain suffered the consequences of a housing bubble, later exacerbated by a run on its government bonds. The United States had a budget surplus and a sharply declining debt-to-GDP ratio as recently as 2001. In that year, thanks to low unemployment and increasing payroll tax revenues, Social Security’s reserves were projected to increase faster than the claims of retirees. (More on Social Security in chapter 3.)

The U.S. debt ratio rose between 2001 and 2008 because of two wars and gratuitous tax cuts for the wealthy, not because of an excess of social generosity. The deficit then spiked mainly because of a dramatic falloff in government revenues as a result of the recession itself. The sharp increase in government debt was the effect of the collapse, not the cause.

The United States and other nations had far higher ratios of public debt to GDP at different points in their histories, and those debts did not prevent prosperity—as long as other sensible policies were followed. Britain’s debt was well over 200 percent of GDP after the Napoleonic Wars, on the eve of the Industrial Revolution. It rose to more than 260 percent at the end of World War II, a period that ushered in the British economy’s best three decades of performance since before World War I.

Along with government borrowing, consumer debt is the other villain of the orthodox account. Supposedly, people went on a borrowing binge to finance purchases they couldn’t afford, and now the piper must be paid. This contention is a half-truth that leaves out two key details.

One is the worsening economic situation of ordinary families. In the first three decades after World War II, wages rose in lockstep with productivity. As the economy, on average, became more prosperous, that prosperity was broadly shared. American consumers took out mortgages to buy homes (with very low default rates) but engaged in little other borrowing. However, earnings stagnated in the 1970s, and that trend worsened after 2001. Nearly all the productivity gains of the economy went to the top 1 percent.

Wages began to lag because of changes in America’s social contract. Unions were weakened. Good unemployment insurance and other government support of workers’ bargaining power eroded. High unemployment created pressure to cut wages. Corporations that had once been benignly paternalistic became less loyal to their employees. Deregulation undermined stable work arrangements. Globalization on corporate terms made it easier for employers to look for cheaper labor abroad. (See chapter 2 for more on lagging wages.)

During this same period, housing values began to increase faster than the rate of inflation, as interest rates steadily fell after 1982. Many critics ascribe the housing bubble to the subprime scandal, but in fact subprime loans accounted for just the last few puffs. The rise in prices mostly reflected the fact that standard mortgages kept getting cheaper, thanks to a climate of declining interest rates. Low-interest mortgage loans meant that more people could become homeowners and that existing homeowners could afford more expensive houses. With 30-year mortgages at 8 percent, a $2,000 monthly payment finances about a $275,000 home. Cut mortgage rates to 4 percent and the same payment finances roughly a $420,000 home. Low interest rates bid up housing prices. And the higher the paper value of a home, the more one can borrow against it. (It’s possible to temper asset bubbles with regulatory measures, such as varying down-payment requirements or cracking down on risky mortgage products. But the Fed has resisted using these powers.)
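The payment arithmetic here follows from the standard fixed-rate annuity formula. A quick sketch in Python, where the function name is mine and the outputs are rounded:

```python
def principal_financed(monthly_payment, annual_rate, years=30):
    """Principal a fixed monthly payment can support on a fully
    amortizing fixed-rate mortgage."""
    r = annual_rate / 12                       # monthly interest rate
    n = years * 12                             # number of payments
    return monthly_payment * (1 - (1 + r) ** -n) / r

print(round(principal_financed(2000, 0.08)))   # roughly $272,600 at 8 percent
print(round(principal_financed(2000, 0.04)))   # roughly $419,000 at 4 percent
```

Note that halving the rate does not quite double the affordable principal, since part of each payment retires principal at either rate, but the direction of the effect is exactly what the text describes: cheaper money bids up what buyers can pay.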

The combination of these two trends—declining real wages and inflated asset prices—led the American middle class to use debt as a substitute for income. People lacked adequate earnings but felt wealthier. A generation of Americans grew accustomed to borrowing against their homes to finance consumption, and banks were more than happy to be their enablers. In my generation, second mortgages were considered highly risky for homeowners. The financial industry rebranded them as home equity loans, and they became ubiquitous. Third mortgages, even riskier, were marketed as “home equity lines of credit.”

State legislatures, meanwhile, paid for tax cuts by reducing funding for public universities. To make up the difference, they raised tuition. Federal policy increasingly substituted loans for grants. In 1980, federal Pell grants covered 77 percent of the cost of attending a public university. By 2012, this was down to 36 percent. Nominally public state universities are now only 20 percent funded by legislatures, and their tuition has trebled since 1989. By the end of 2011, the average student debt was $25,250. In mid-2012, total outstanding student loan debt passed a trillion dollars, leaving recent graduates weighed down with debt before their economic lives even began. This borrowing is anything but frivolous. Students without affluent parents have little alternative to these debts if they want college degrees. But as monthly payments crowd out other consumer spending, the macroeconomic effect is to add one more drag to the recovery.

Had Congress faced the consequences head-on, it is hard to imagine a deliberate policy decision to sandbag the life prospects of the next generation. But this is what legislators at both the federal and state levels, in effect, did by stealth. They cut taxes on well-off Americans and increased the student debt of the non-wealthy young to make up the difference. The real debt crisis is precisely the opposite of the one in the dominant narrative: efficient public investments were cut, imposing inefficient private debts on those who could least afford to carry them.

During this same period, beginning with the Reagan presidency, other government social protections were weakened and employer benefits such as retirement and health plans became less reliable. People were thrown back on what my colleague Tamara Draut calls “the plastic safety net” of credit card borrowing. In short, debt became the economic strategy of struggling workaday Americans. For the broad middle class, the ratio of debt to income increased from 67 percent in 1983 to 157 percent in 2007. Mortgage debt on owner-occupied homes increased from 29 percent to 47 percent of the value of the house. When housing values collapsed, debt ratios increased further.

From the 1940s through the 1970s—a period when real wages and homeownership rates steadily rose—the habit of the first postwar generation had been to pay down mortgages until homes were owned free and clear and then to use the savings to help finance retirement. By contrast, the custom of the financially strapped second postwar generation, who came of age in the 1970s, 1980s, and 1990s, was to keep refinancing their mortgages, often taking out cash with a second mortgage as well.

Increasingly, young adults facing income shortfalls turned to credit cards and other forms of short-term borrowing. By 2001, the average household headed by someone between twenty-five and thirty-four carried credit card debt of over $4,000—twice as much as in 1989—and was devoting a quarter of its income to interest payments. As Senator Elizabeth Warren of Massachusetts has documented, most of the debt increase went to life’s basic necessities, not luxuries. As health insurance coverage dwindled, the biggest single category was medical debt.

As a matter of macroeconomics, the practice of borrowing against assets sustained consumption in the face of flat or falling wages—until the music stopped. When housing prices began to tumble, the use of debt to finance consumption did not just halt; the process went into reverse as households had to pay down debt. Rising unemployment compounded the damage. Consumer purchasing power took a huge hit, and the economy has yet to recover from this.

According to the Federal Reserve, household net worth declined by 39 percent from 2007 to 2010. The ratio of debt to household income has declined from a peak of 134 percent in 2007 to about 114 percent in 2012, and it is still falling. Borrowing to sustain consumption is no longer viable.

After the fact, it is too facile to cluck that people who suffered declining earnings should have just consumed less. As a long-term proposition, stagnant wages and rising debts were a dubious way to run an economy, but in a short-run depression, paying down net debt only adds to the deflationary drag. The remedy, however, is not to redouble general austerity but to restore household purchasing power and decent wages with a strong recovery.

The real villain of the story is financial industry debt. During the boom years, investment banks, hedge funds, commercial banks with “off-balance-sheet” liabilities, and lightly regulated hybrids such as the insurance giant American International Group (AIG) were typically operating with leverage ratios of 30 to 1 and in some cases of more than 50 to 1. “Leverage” is a polite word for borrowing. In plain English, they borrowed fifty dollars for every one dollar of their own capital. They incurred immense debts, substantially in very short-term money-market loans that had to be refinanced daily. In the case of AIG, which underwrote credit default swaps (a kind of insurance but with no reserves against loss), the leverage was literally infinite. When panic set in, the access to credit dried up in a matter of days.
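To see why such balance sheets are so fragile, consider what a small fall in asset values does to a firm's capital. A toy calculation, treating the ratios as debt to equity as the text's fifty-dollars-per-dollar gloss implies (the function name and the 2 percent scenario are my own illustration):

```python
def equity_after_decline(debt_to_equity, asset_decline):
    """Equity remaining per $1 of capital after assets lose value.
    Assets are the $1 of capital plus the borrowed funds; the debt
    itself is unchanged by the decline."""
    assets = debt_to_equity + 1
    return assets * (1 - asset_decline) - debt_to_equity

print(equity_after_decline(30, 0.02))    # a 2% asset decline erases most of the capital
print(equity_after_decline(30, 1 / 31))  # a ~3.2% decline wipes the firm out entirely
```

At 50 to 1 the margin is thinner still: a 2 percent decline in asset values leaves the firm insolvent outright, which is why a few days of falling collateral values and frozen money markets could topple these institutions.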

With the collusion of credit rating agencies that blessed their opaque and risky securities with triple-A ratings, these financial engineers sold their toxic products to investors around the world. Sometimes the financial engineers even borrowed money to bet against the same securities they created—marketing them as sound investments while they shorted their own creations. When the boom turned out to be a bubble, the highly interconnected financial system crashed, with trillions of dollars in collateral damage to bystanders.