What the UK economy needs is an approach to economic policy that focuses on co-ordinated and concerted investments for prosperity by governments, households, and businesses. As the Labour party develops its policy programme for 2015 and beyond, the role of investment for prosperity needs to be confronted head on.
There’s plenty that doesn’t add up about Pfizer’s claim that the low Irish tax rates it will pay by merging with Allergan are necessary if the company is to fund drug research to stay competitive. Consider that while the pharmaceutical giant was provisioning $2.2 billion for income taxes over the first nine months of this year, it was distributing five times as much – $11.4 billion – to its shareholders, $6.2 billion in stock buybacks and $5.2 billion in dividends. That was 159 percent of its profits over these three quarters.
Such mind-numbing distributions to shareholders are nothing new for Pfizer. The company has been piling stock buybacks on top of dividends since 1985. From January 2001 through September 2015, Pfizer paid out $95.5 billion in buybacks and $87.1 billion in dividends, representing 117 percent of its net income. Meanwhile, it booked $37.1 billion in corporate income taxes to the IRS.
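The payout arithmetic in the last two paragraphs can be checked with a quick back-of-the-envelope calculation. This is just a sketch using the dollar figures cited above (in billions); the implied net-income number is derived from the stated payout ratios, not from Pfizer's filings directly.

```python
# Checking the Pfizer distribution figures cited above
# (all amounts in billions of dollars, as given in the text)

# First nine months of 2015
buybacks_2015 = 6.2
dividends_2015 = 5.2
taxes_2015 = 2.2
distributions_2015 = buybacks_2015 + dividends_2015
print(distributions_2015)                    # 11.4, roughly five times the tax provision
print(distributions_2015 / taxes_2015)       # about 5.2

# January 2001 through September 2015
buybacks = 95.5
dividends = 87.1
total = buybacks + dividends                 # 182.6 in total distributions
# A 117 percent payout ratio implies net income of roughly:
print(total / 1.17)                          # about 156
```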
Yet in a Wall Street Journal interview in October, to forestall the public criticism of corporate flight that was bound to come with the upcoming Pfizer-Allergan merger announcement, Pfizer CEO Ian C. Read moaned that its U.S. tax bill puts the company at a “tremendous disadvantage” in global competition. “We’re fighting,” Read said in the interview, “with one hand tied behind our back.” When one looks at Pfizer’s gargantuan distributions to shareholders, however, it is obvious that if Read can’t make use of both hands to secure innovation finance, it is not Uncle Sam who tied the knot.
If Pfizer is cash-constrained, it is far more likely that it is the golden handcuffs of stock-based executive pay that are the source of the problem. In 2014, CEO Read had total direct compensation of $22.6 million, of which 27 percent came from exercising stock options and 50 percent from the vesting of stock awards. The other four highest-paid executives named on Pfizer’s 2015 proxy statement averaged $8.0 million, with 24 percent from stock options and 41 percent from stock awards. Pfizer states that the company has a prime commitment to “enhancing shareholder value,” a self-serving ideology for executives whose remuneration depends on the company’s stock-price performance. And stock buybacks provide a potent tool for manipulating a company’s stock price for the sake of enhancing executive stock-based pay.
If we accept stock price as a measure of a company’s performance, then compared with the S&P 500 Index, Pfizer has been successful over the years. Yet as a company that is, as Read put it in his Wall Street Journal interview, “doing what we need to do to ensure that we can continue to innovate,” Pfizer is a failure, and it is a failure for which American consumers, taxpayers and workers have been paying a very high price. To see why Pfizer’s stock-price performance does not translate into superior economic performance, we need to delve into the inner workings of the drug company’s business model. Let’s look at how Pfizer generates the profits that support its mega-distributions to shareholders and its stock-price performance.
Over the past 15 years, Pfizer’s growth has been driven by three major acquisitions: Warner-Lambert in 2000, Pharmacia in 2003, and Wyeth in 2009, each one bringing with it a number of blockbuster drugs. The most lucrative by far has been Lipitor – already a huge blockbuster at Warner-Lambert when Pfizer acquired that company in 2000 – ringing up an annual average of $11.0 billion in sales from 2000 through 2011. But the Lipitor patent expired in 2010, and by 2014 its revenues had fallen to $2.1 billion, although it was still the fifth-best seller among Pfizer’s products.
Pfizer is on the prowl for new blockbusters to fund its buyback habit. In 2014 AstraZeneca, the British-Swedish drug maker with strong sales in cancer drugs, rebuffed Pfizer’s takeover attempt. Now Pfizer has struck gold with Allergan, which owns the mega-seller Botox.
Botox, known for its ability to erase wrinkles, was first approved by the FDA in 1989, 13 years before it was approved for cosmetic treatments. Over half of sales actually go to therapeutic uses such as spasticity, hyperhidrosis and chronic migraine. New therapeutic indications have been found for Botox, resulting in new patent protection that will keep this drug valuable for a very long time.
As has been the case with the takeovers of Warner-Lambert, Pharmacia and Wyeth, Pfizer wants to get its greedy hands on blockbuster drugs that innovative companies have already developed and then, in the name of enhancing shareholder value, milk the acquisitions until the patents run dry. In merging with Allergan, Pfizer will gain from the corporate inversion, but compared with the profits to be generated by Allergan’s existing products, the Irish tax dodge is just icing on the Pfizer-Allergan wedding cake.
From 2010 to 2014, Pfizer’s revenues fell from $67.8 billion to $49.6 billion, mainly because of the expiration of the patents on a number of the company’s blockbuster drugs. Over these four years, it slashed worldwide employment from 110,600 to 78,300. With Read as CEO, R&D spending has declined compared with the previous 15 years.
Whatever its recorded R&D spending, however, Pfizer has long since lost the capability to generate its own drug products. Since 2001 the company has had significant revenues from only four internally developed and originated products, the last one in 2005. In 2010 sales of these four products totaled $3.7 billion, but in part because of expiration of patents on two of the drugs, by 2014 these revenues had slumped to $1.3 billion.
In 2014 the United States was Pfizer’s biggest national market, accounting for 38 percent of revenues. As Pfizer moves to Ireland, the U.S. market will remain critical to its profits not only because of its size but also because, unlike the governments of other major nations, Congress has chosen not to regulate drug prices. Going back decades, U.S. drug prices have been at least double those for the same products in other national markets. And over the past few years, Pfizer, along with many other U.S. pharmaceutical companies, has been aggressively jacking up drug prices even more, as a recent study shows.
Yet whenever Congress has questioned the high prices, major U.S. drug companies say they need the hefty profit margins to fund more R&D expenditures. For Pfizer, that argument may have held some water back in the ’80s. But for the past three decades Pfizer has been using its profits to enrich shareholders. U.S. taxpayers pay extortionate prices for indispensable pharmaceutical drugs so that companies like Pfizer, Merck and J&J can do billions of dollars in buybacks every year to manipulate their companies’ stock prices. And top executives get paid many millions for this financial engineering.
It gets worse. As the drug companies hold U.S. households hostage in our need to consume their products, taxpayers hand over massive amounts of hard-earned pay to support the drug companies’ R&D efforts. From 1938 through 2014, the National Institutes of Health (NIH) spent a total of $927 billion in 2014 dollars on life sciences research, and this year the NIH budget is over $30 billion, funded by taxpayers. Drug companies benefit from all sorts of other protections and subsidies, including those under the Orphan Drug Act of 1983. In Pfizer’s case, Lipitor, its most profitable drug to date, and Botox, its new shareholder-value enhancing therapy, both originated as orphan drugs.
Since 2010, Pfizer’s annual sales have plunged by about $20 billion and its employment by more than 40,000 people. But Pfizer’s Read-era profit margins are at a record high for the company while Pfizer’s stock price has soared. If increasing its stock price is Pfizer’s raison d’être, then the allocation of more than 100 percent of profits to “enhancing shareholder value” through buybacks and dividends has worked – but at a huge cost to American innovation, employment and income distribution.
In manufacturing plants all over the world, both managers and workers have discovered that when employees are involved in workplace decision-making, productivity rises. So in the United States, it made national news when on Feb. 14, 2014 workers at the Volkswagen auto plant in Chattanooga, Tennessee rejected representation by the United Automobile Workers by a vote of 712 to 626.
Unfortunately, the Chattanooga workers said no to just the type of employee involvement in productivity improvement that will be necessary to sustain their jobs going forward. To compete on the world stage, a strong employee voice in the workplace matters.
The UAW’s Chattanooga campaign would have made Volkswagen the very first foreign car company to have a unionized plant in the U.S. More importantly, a victory for the UAW was a precondition for the creation of a works council at the Chattanooga plant — a form of worker-management plant-level collaboration for improving manufacturing productivity that is a fixture of German industrial relations, but virtually unknown in the U.S. Through information-sharing and problem-solving, the managers and employees on a works council improve product quality, speed up production processes and reduce materials waste. It's a win-win.
If American workers want to ensure the competitiveness of their manufacturing jobs, they should jump at the chance of instituting this type of forward-looking arrangement, one that enables their voice to influence the productivity of the work that they do. A large body of evidence shows that the involvement of workers in enhancing productivity increases both the earnings of workers and the competitiveness of the products that they produce. Our forthcoming book, Corporate Governance, Employee Voice and Work Organization: Sustaining High-Road Jobs in the Automotive Supply Industry (Oxford University Press, 2014), co-authored with Inge Lippert of the Confederation of German Trade Unions and Ulrich Jürgens of Science Center-Berlin, provides fresh evidence of the importance of worker involvement in the productivity improvements that contribute to making their own jobs, and the companies for which they work, competitive on a global scale.
Based on in-depth case studies of automotive supply companies in Germany, Sweden, and the U.S., our book compares governance regimes and work organization at the plant level. We find that the automotive supply industry requires creativity and learning from its workers to generate products that are competitive in terms of both quality and cost. The ability and incentive of workers to use their insights and intelligence to contribute to productivity improvements depends on the organization of work at the plant level. High-performance workplaces, characterized by “high road” jobs in which productivity improvements and pay increases go hand in hand, are critical to sustained competitive advantage.
Our research reveals the important role of employee voice mechanisms in high-road work designs, not just in German and Swedish automotive suppliers where worker involvement is formally recognized, but also in supplier firms in the U.S. where productivity is generally viewed as solely management’s concern. Employee representation in strategic decision-making substitutes a stakeholder approach for the one-sided emphasis on “maximizing shareholder value” — a flawed ideology embraced by business schools over the past 25 years in which all that matters is the company’s stock price. Our research confirms, and helps to explain, a larger body of industrial experience that shows that compromises between the financial interests of shareholders and the productive interests of employees have had considerable success in continental Europe. To succeed in global competition, the U.S. automobile industry needs more, not less, employee voice.
Different nations favor different forms of employee voice. German firms have works councils at the plant level, and in companies with 2,000 or more employees, under the system known as co-determination, equal representation of workers and shareholders on the board of directors, plus one neutral seat. Whether at the plant level or the board level, change requires workers’ input and consent.
In Sweden, union representatives have the more prominent role. Since 1976, Swedish companies have been regulated by an act of parliament, the Co-determination Act, stipulating that company management must consult unions prior to taking decisions on major changes such as corporate reorganization, new work conditions, or the introduction of new technology. Although this requirement ultimately does not displace managerial prerogative, it does give time for unions to investigate the matters being decided and consult with members at central and plant levels prior to decisions being made. In practice, the co-determination regulations are codified in collective agreements across the companies concerned. Not only do these arrangements for employee voice allow for better road-tested decisions in firms; they also confer greater legitimacy on the management of change. For this reason, initial employer opposition to this form of employee voice has now given way to broad acceptance.
In the U.S., in publicly listed companies, the ideology of maximizing shareholder value reigns supreme, even though, as one of us has shown, it is an ideology that enables top executives and corporate raiders to extract value from companies at the expense of value creation. Nevertheless, one of the U.S. companies that we studied had a 100 percent Employee Stock Ownership Plan (ESOP), in which the scope for self-dealing by top executives is much more constrained. And in two publicly listed U.S. companies, plant managers had instituted programs for tapping workers’ knowledge, with UAW members involved in one of the cases. Indeed, the workers at the unionized plant had only recently elected to have the UAW represent them because of the protection that it afforded against their jobs being shipped overseas.
The UAW has been seeking to become more proactive in questions of labor’s voice in productivity improvement. UAW president Bob King assumed his position four years ago, coming off his work as UAW vice-president in structuring wage concessions to Ford Motor Company that helped to keep it solvent in the 2008-2010 automotive industry crisis, while General Motors and Chrysler went bankrupt and had to be bailed out by taxpayers. But as King made the cost-cutting bargains at Ford, he also placed worker-management productivity agreements on the agenda as a sustainable way to keep automobile plants competitive in the United States.
UAW rules hold that King, now age 67, cannot run for re-election as UAW president at the end of his term in June of this year. But the "high road" drive to improve the productivity of manufacturing jobs rather than pursue the “low-road” alternative of cutting workers’ wages needs to transcend his presidency. The evidence in our book shows that to sustain high-road jobs while maintaining workers’ standards of living in advanced economies such as Germany, Sweden and the United States, workers’ voice in improving competitiveness needs to extend beyond the plant level to include restraints on corporate financial policies that enable rapacious company executives and Wall Street predators to appropriate corporate cash while leaving workers with low pay or out of work.
German-style works councils are by no means a complete solution to the problem of generating competitive products in high-wage nations. As a foundation for engaging workers in the process of productivity improvement, however, the “high road” alternative presented by the VW Chattanooga union election was a choice that American workers should have embraced.
Americans are understandably upset about profits without prosperity. Corporate executives seem to be the big winners, while the middle class is declining and young people face a bleak economic future. How did this happen? It's easy to blame technology, especially the automation that supposedly displaces workers. But that's not the real story. The fact is that automation creates jobs. It's the misuse of corporate profits that is destroying them.
There was a time when high corporate profits meant bright employment prospects for most members of the US labor force. That relation between profits and prosperity was strongest in the immediate post-World War II decades when US corporations led the world in manufacturing, provided workers with career-long employment security, and reinvested profits in productive capabilities in the United States. For the past three decades, however, the pursuit of corporate profits has been at the expense of prosperity for an ever-growing proportion of the American population.
This disconnect between profits and prosperity began in the 1980s with permanent plant closings that cost production workers their middle-class jobs. It increased in the 1990s as major US corporations scrapped the career-with-one-company norm that had prevailed for salaried employees, and it became common even for college-educated people with a couple of decades of work experience to find themselves on the wrong end of the pink slip. Then in the 2000s, as US corporations accelerated the globalization of production activities, the jobs of all members of the US labor force, no matter what their level of educational attainment, became vulnerable to competition from qualified people in lower wage areas of the world.
Profits without prosperity is now starting to get attention in the mainstream press. In his New York Times op-ed, “Robots and Robber Barons” (Dec. 9, 2012), Paul Krugman seeks to explain why, with corporate profits up, labor compensation is down. As part of the ongoing digital revolution, he argues, robots are throwing American workers out of their jobs. In addition, he claims that corporations are making high profits through price gouging, and are not sharing these gains with their employees.
Krugman is on to something important that needs to become part of the national policy debate. But he is off target in blaming a combination of automation and monopolistic practices for the disconnect between profits and prosperity.
Automation is not the problem. As part of a process that could reconnect profits and prosperity, the US economy needs more, not less, corporate investment in automation. A company that successfully invests in automation creates far more, and typically better, jobs than those it destroys. Indeed, the study of industrial history reveals that when a nation’s leading companies fail to make sufficient investments in automation, its economy runs into trouble.
As Krugman himself notes, the argument that automation is bad for workers’ employment and incomes dates back almost two centuries to the British economist, David Ricardo, who was writing during the world’s first industrial revolution. By definition, automation displaces the need for workers to perform the tasks that have been automated. If, however, automation only destroyed jobs, advanced economies such as those of Britain, France, Germany, Italy, Japan, and the United States would not have risen to positions of world industrial leadership with strong middle classes.
Some of these new jobs are created in the industries that produce automated equipment. Japan is by far the world leader in both the production and use of robotics. An original source of Japan’s competitive advantage in this capital-goods sector was the willingness and ability of production workers to cooperate with engineers in automating tasks they performed on the shop floor. Under Japan’s system of “lifetime employment,” these production workers did not fear that the introduction of robots would result in loss of employment, while their involvement in the automation process gave them experience that, post-automation, could be put to productive use in other parts of the business organization.
Increasingly, moreover, in the age of nanotechnology, automation performs productive functions that no human being could ever have possibly done. Rather than destroy jobs, these automated processes make it possible for companies to produce all kinds of sophisticated goods and services. These products are the hallmark of an advanced economy, and open up all kinds of new employment opportunities in companies and countries in which these goods and services are produced.
Automation entails huge upfront investments. Companies that invest in automation have to build organizations to ensure steady supplies of high-quality materials, improve and maintain machinery, and capture sufficiently large market shares to achieve economies of scale. These investments in the development and utilization of automated facilities create lots of high-value-added jobs, especially for companies that, because of their investments, can grow large by producing higher quality, lower cost products than the competition.
To repeat, automation is not the problem. The three-decade-long erosion of middle-class jobs in the United States is the result of permanent plant closings, layoffs of older employees, and the globalization of employment – none of which has been the result of automation. In the process, many US industrial corporations have become very profitable (for now, but by no means forever). The question that needs to be asked is why US corporations are failing to reinvest these profits in new products and processes that can create large numbers of new high value-added employment opportunities in the United States.
The problem lies in the ideology that corporations should be governed to “maximize shareholder value,” which became prevalent in boardrooms and business schools in the 1980s, and has become totally dominant since. In the name of shareholder value over the decade 2001-2010, the 500 corporations in the S&P 500 Index (representing about 75 percent of US stock-market capitalization) expended not only 40 percent of their profits on cash dividends – the normal mode of rewarding shareholders – but also another 54 percent on stock buybacks, the purpose of which is to give a manipulative boost to a company’s own stock price. Large established companies did hardly any buybacks in the early 1980s. Over the past decade, buybacks by S&P 500 companies totaled about $3 trillion, which has left scant corporate resources for investment in innovation and high-value-added job creation.
When companies do massive buybacks to boost their own stock prices, the big winners are the very same top executives who make these resource-allocation decisions. Why? Because the largest single component of top executive pay is the income from exercising stock options – which become more lucrative when the stock price goes up, even if for just a short period of time during which the options can be exercised and the acquired stock sold.
Many corporate executives justify buybacks by arguing that they represent the best corporate investments available. How about investments in innovation and job creation? Or how about corporate support for government investments in the national knowledge base, which typically provides the foundation for enterprise innovation and profits? If top executives have been the big winners of this financialized buybacks-options game, then the big losers have been erstwhile members of the US middle class as well as tens of millions of younger Americans who will never have the opportunity of entering the middle class.
Fundamental to the achievement of economic prosperity are investments in physical infrastructures and human capabilities. These investments are essential to generate well-paid employment opportunities in the domestic economy and competitive advantage in the global economy. In a world of changing technologies and emerging markets, a nation that fails to invest for the future on a continuing basis can look forward to long-term economic decline.
Investments for prosperity are not solely the responsibility of the business sector. Governments and households have to invest as well. Governments invest in physical infrastructures – for example roads, schools, and defence – that have the character of public goods as well as in society's "knowledge base" consisting of a generally educated labour force and specific expertise in science and technology. Households invest in the development and sustenance of a capable labour force, relying heavily on government investments in education and physical infrastructures.
With government and household investments as essential foundations, businesses invest in processes of production and distribution to transform physical and human inputs into goods and services that customers want to buy at prices that they are willing (or can afford) to pay.
Prosperous economies are ones in which investments by governments, households, and businesses – or what I call the "investment triad" – reinforce one another in building innovative capabilities.
Of course, investments for tomorrow require access to financial resources today. Governments need taxes, households need wages, and businesses need profits. Each actor in the triad can leverage this internal finance by taking on external debt. Ultimately, however, it is internal finance – taxes, wages, and profits – that must be sufficient to both service the debt and invest for the future if the prosperity of the economy is to be sustained.
So what is the weak link in the UK investment triad? Conservatives would have a tough time answering this question because they believe that we can rely on unregulated markets to allocate the economy's resources. The problem is that it is organisations – governments, households and businesses – not markets that invest for the future. A failure of an economy to invest in the productive capabilities that are the bedrock of sustainable prosperity is an organisational failure, not a market failure.
Governments can fail, and there is no doubt that Labour's programme for prosperity must reconsider the effectiveness of the national and local governments in investing in the UK's physical infrastructures and the nation's knowledge base. Households can fail, and there is a pressing need to probe deeply into whether Britain's households have access to the resources and stability required to develop the next generation to be productive members of the labour force.
For rebuilding Britain, however, the real challenge for the Labour party is its approach to business failure. I am not talking primarily about businesses that fail to make profits and possibly go bankrupt. I am referring to some of the nation's largest and most profitable businesses that pay out far too much to shareholders instead of investing for the future. For the period 2001-2010, 86 of Britain's largest companies that are included in the S&P Europe 350 Index made €882bn in net profits of which 63% was paid out in dividends and another 26% to buy back their own shares.
This development, the financialisation of British business corporations, has them hot on the heels of their American counterparts. For the decade 2001-2010, 459 companies in the S&P 500 Index, almost all of which are US-based, expended $1.9 trillion, or 40% of net income, on dividends, and $2.7 trillion, or 54% of net income, on share buybacks, leaving only 6% of profits to potentially be invested for the future.
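The UK and US payout shares quoted in the last two paragraphs can be tallied in a few lines. This is a sketch using only the percentages and the €882bn profit figure cited above; the implied retained amount is a rough derivation, not a number from the underlying data.

```python
# Shares of net income paid out to shareholders, 2001-2010, as cited above
uk_profits = 882          # net profits of 86 large UK companies, in billions of euros
uk_payout = 0.63 + 0.26   # dividends + buybacks = 89% of profits
us_payout = 0.40 + 0.54   # S&P 500 sample: dividends + buybacks = 94% of profits

# What is left over for potential reinvestment
print(f"UK retained: {1 - uk_payout:.0%} of profits (roughly {uk_profits * (1 - uk_payout):.0f}bn euros)")
print(f"US retained: {1 - us_payout:.0%} of profits")
```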
In the United States, I have called for a ban on share buybacks in particular and a major reform of corporate governance in general. Neither US political party shows any real interest in heeding the call. The American people are paying the price for this inaction, and will continue to do so long into the future. In the UK, the Labour party has the opportunity to take investment for prosperity seriously. Rather than emulate the declining US economy, Labour can formulate a new "Anglo-Saxon" model to rebuild Britain that recognises how governments, households, and businesses can work together as an investment triad that forms the foundation for equitable and stable economic growth.
Corporations are not working for the 99%. But this wasn’t always the case. In a special 5-part AlterNet series, William Lazonick, professor at UMass, president of the Academic-Industry Research Network, and one of the leading experts on the American corporation, along with journalist Ken Jacobson and AlterNet’s Lynn Parramore, will examine the foundations, history, and purpose of the corporation to answer this vital question: How can the public take control of the business corporation and make it work for the real economy?
The wealth of the American nation depends on the productive power of our major business corporations. In 2008 there were 981 companies in the United States with 10,000 or more employees. Although they were less than two percent of all U.S. firms, they employed 27 percent of the labor force and accounted for 31 percent of all payrolls. Literally millions of smaller businesses depend, directly or indirectly, on the productivity of these big businesses and the disposable incomes of their employees.
When the executives who control big-business investment decisions place a high priority on innovation and job creation, then we all have a chance for a prosperous tomorrow. Unfortunately, over the past few decades, the top executives of our major corporations have turned the productive power of the people into massive and concentrated financial wealth for themselves. Indeed the very emergence of “the 1%” is largely the result of this usurpation of corporate power. And executives’ use of this power to benefit themselves often undermines investment in innovation and job creation.
These corporations do not belong to them. They belong to us. We need to confront some powerful myths of corporate governance as part of a movement to make corporations work for the 99%. To start, we have to recognize these corporations for what they are not.
• They are not “private enterprise.”
• They should not be run to “maximize shareholder value.”
• The mega-millions in remuneration paid to top corporate executives are not determined by the “market forces” of supply and demand.
Let’s take a closer look at each of these myths.
1. Public corporations are not private enterprise.
Here’s something you’ll rarely hear stated by today’s politicians and pundits: Publicly listed and traded corporations are not private enterprise. As documented by the pre-eminent business historian Alfred D. Chandler, Jr., in a book aptly called The Visible Hand, about 100 years ago the managerial revolution in American business placed salaried managers in charge of running the nation’s largest and most productive business corporations.
This was a peaceful revolution in which a generation of owner-entrepreneurs who had founded these companies some decades earlier used initial public offerings on the New York Stock Exchange to sell their ownership stakes to the public, leaving decision-making power in the hands of salaried managers. In effect, these corporate employees, and the boards of directors whom they selected, became trustees of the immense productive power that these corporations had accumulated.
Even when founders of companies that evolve into major public corporations become their CEOs, they generally occupy the top positions as corporate employees, not owners. For example, when the late Steve Jobs returned to Apple Computer in 1997, 11 years after being denied the CEO position of the company he had founded, his ascent to the top position was as a manager, not an owner. When a company founder like Larry Page of Google gives up private ownership by publicly selling shares, he may become CEO of the new corporation, but he is occupying this position as a hired hand, not as a private entrepreneur.
In other words, private owners make choices to transform a private enterprise into a public company that then needs to be regulated as such. There are other choices that could have been made. When the retiring owner of a private company wants to pass on control over a prosperous company to his or her employees, an alternative to the public corporation is to establish an Employee Stock Ownership Plan, or ESOP. There are many successful companies in the U.S. that are not public corporations precisely because they are under the collective ownership of their employees.
It is also possible for some investors to agglomerate sufficient shares to take a public company private (Mitt Romney made his millions doing just that), but that only emphasizes the point: public corporations are not private enterprise. We regulate public corporations far more stringently than private businesses precisely because they are publicly held. And as U.S. citizens, how we regulate public corporations (or even private businesses, for that matter) is up to us.
2. Corporations should be run to benefit everyone who contributes to their success – not just shareholders.
It's a myth that corporations have a legal duty to maximize profits to shareholders at the expense of everyone else. Historically, the executives and directors of U.S. public corporations understood that they had a responsibility to other constituencies – customers, employees, suppliers, creditors, the communities in which they operate, and the nation.
Today, however, the dominant ideology is that a corporation should “maximize shareholder value.” At the most basic level, the rationale for this ideology is that shareholders own the company’s assets, and therefore have exclusive claim on its profits. A more sophisticated argument is that among all stakeholders in the business corporation only shareholders bear the risk of getting a positive return from the firm, while all other participants receive guaranteed returns for their productive contributions. If society wants risk-bearing, so the argument goes, firms need to return value to shareholders.
This argument sounds logical – until you question its fundamental assumption. Innovation, defined as the process that generates goods or services that are higher quality and/or lower cost than those previously available, is an inherently uncertain process. Anyone who invests their labor or their capital in the innovation process is taking a risk that the investment may not generate a higher quality, lower cost product. Once you understand the collective and cumulative character of the innovation process, you can easily see that the assumption that shareholders are the only participants in the business enterprise who make investments in productive resources without a guaranteed return is just plain false. In an innovative economy, workers and taxpayers habitually make these risky investments.
How do workers make these risky investments? As is generally recognized by employers who declare that “our most important assets are our human assets”, the key to successful innovation is the extra time and effort, above and beyond the strict requirements of the job, that employees expend interacting with others to confront and solve problems in transforming technologies and accessing markets. Anyone who has spent time in a workplace knows the difference between workers who just punch the clock to collect their pay from day to day and workers who use their paid employment as a platform for the expenditure of creative and collective effort as part of a process of building their careers.
As members of the firm, these forward-looking workers bear the risk that their extra expenditures of time and effort will not yield the gains to innovative enterprise from which they can be rewarded. If, however, the innovation process does generate profits, workers, as risk-bearers, have a claim to a share of those gains in the form of promotions, higher earnings, and benefits. Instead, shareholder-value ideology is often used as a rationale for laying off workers whose hard and creative work has contributed to the company’s success. That’s grossly unfair.
Taxpayers also invest in the innovation process without a guaranteed return. Through government agencies, taxpayers fund infrastructural investments that, given their cost and the uncertainty of returns, business enterprises would not have made on their own. It is impossible to explain U.S. leadership in information technology and biotechnology without recognizing the role of government in making investments to develop new knowledge and facilitate its diffusion. As one example, the current annual budget of the National Institutes of Health (http://www.nih.gov/about/budget.htm) is about $31 billion, twice in real terms its level in the early 1990s. Without this government expenditure on research, year in and year out, we would not have a medicinal drug industry. Yet shareholder-value ideology is often used to justify low taxes that deny taxpayers a return on these investments.
So shareholder-value ideology provides a flawed rationale for excluding workers and taxpayers from sharing in the gains of innovative enterprise. To turn this ideology on its head, what risk-bearing role do public shareholders play in the innovation process? Do they confront uncertainty by strategically allocating resources to innovative investments? No. As portfolio investors, they diversify their financial holdings across the outstanding shares of existing firms to minimize risk.
They do so, moreover, with limited liability, which means that they are under no legal obligation to make further investments of “good” money to support previous investments that have gone bad. Even for these previous investments, the existence of a highly liquid stock market enables public shareholders to cut their losses instantaneously by selling their shares – what has long been called the “Wall Street walk”.
3. Executive compensation is a rigged game, not the result of the laws of supply and demand.
You often hear that stratospheric executive pay is the result of some inexorable law of supply and demand. If we don’t give top executives their multimillion-dollar compensation, they won’t be willing to come to work and do their jobs. They are supposedly the bearers of “scarce talent” that demands a high price in the marketplace. Even Robert Reich, Secretary of Labor in the Clinton administration and a critic of U.S. income inequality, has justified the explosion in executive pay, arguing that intense competition makes it much more difficult than it used to be to find the talent to manage a large corporation (Supercapitalism, 2008, pp. 105-114).
That is not what determines executive pay. Here is how it works: Top executives select other top executives to sit on “their” boards of directors. These directors hire compensation consultants to recommend an executive pay package, which consists of salary, bonus, incentive pay, retirement benefits, and all manner of other perks. The consultants look at what top executives at other major corporations are getting, and say that, well, this executive should get more or less the same. Since the directors are mostly these very same “other executives”, they have no interest in objecting – and if any of them were to do so, they would find that they are no longer being invited to sit on corporate boards.
Meanwhile, given the preponderance of stock-based compensation (especially stock options) in executive pay, whenever there is a speculative boom in the stock market, top executives of the companies with the most rapidly rising stock prices make out like bandits. The higher compensation levels then create a “new normal” for executive pay that, via the compensation consultants and compliant directors, ratchets up the pay of all the top dogs. And, when the stock market is less speculative, these corporate executives do massive stock buybacks to push stock prices up.
What we have here is not “market forces” at work but an exclusive club that promotes the interests of the 0.1%. All too often executives allocate corporate resources to benefit themselves rather than to invest in innovation and job creation. It is time that the 99% see through the ideology, break up the club, and get the U.S. economy back on track.
Corporate power for the people!
Business corporations exist as part of the collective and cumulative development of our economy. The investments in innovation and job creation that these corporations make or decline to make are key to our future prosperity. Public shareholders, the supposed owners of these corporations, are in general only willing to hold shares in a company because of the ease with which they can terminate this relation by selling their shares on the stock market. Yet, almost unanimously, corporate executives proclaim that they run their companies for the sake of shareholders. In fact, with their personal coffers pumped up by stock-based compensation, our business “leaders” have increasingly run the corporations for themselves.
The real corporate investors are taxpayers and workers. Through government agencies at federal, state, and local levels, taxpayers supply business corporations with educated labor and physical infrastructure. Through their interaction in business organizations, workers expend the time and effort that can generate innovative products. In the name of shareholder value, however, taxpayers and workers have been losing out. It’s time to confront the myths of “private enterprise”, “shareholder value”, and “market-determined executive compensation” with arguments about how the innovation process actually works with sustainable prosperity as the result.
What will it take to build a movement that can make the business corporation work for the 99%?
We have to elect politicians who will take on corporate power rather than shill for corporate power-brokers. We have to support labor leaders who recognize that gaining a voice in corporate governance is the only way to ensure that corporations will invest in workers and create good jobs. We need teachers at all levels of the education system who understand what business corporations are and what they are not. We need the responsible media to escape from the grip of corporate control. And we have to put in place business executives who represent the interests of civil society rather than those of an elite egotistical club.
- April 25: National Day of Action Against Student Debt
On April 25th, the total amount of student loan debt in the U.S. is expected to top $1 trillion. This staggering economic milestone marks a momentous victory for Wall Street and the 1% against two generations of students and families. A day of action will target big banks and student lenders, as well as increasingly corporatized universities.
- May 1st: May Day
Recognized worldwide as International Workers’ Day, May 1st marks the Haymarket Massacre of 1886 in Chicago, where workers were fighting for the eight hour workday. Look for rallies and gatherings across the country that will draw attention to the needs and concerns of workers.
The Move Your Money campaign was launched in 2010 to take on the power of the megabanks that helped cause the financial crisis and continue to wreak havoc on our economy. Numerous ongoing actions around the country are calling attention to the need for fairness and accountability in the banking industry (read about the latest: “‘Move Your Money’ Goes Nationwide As Cities Pull Their Money”).
The leaderless resistance movement continues to take on the greed and corruption of the 1%, including a recent day of action for public transit workers. Check the website for gatherings and actions in your community.
Corporations are not working for the 99 percent. But this wasn’t always the case. In a special five-part series, William Lazonick, professor at UMass, president of the Academic-Industry Research Network, and a leading expert on the business corporation, along with journalist Ken Jacobson and AlterNet’s Lynn Parramore, will examine the foundations, history and purpose of the corporation to answer this vital question: How can the public take control of the business corporation and make it work for the real economy?
While most Americans struggle to make ends meet, the CEOs of major U.S. business corporations are pulling eight-figure, and sometimes even nine-figure, compensation packages. When they win, the 99 percent lose. We rely on these executives to allocate corporate resources to investments in new products and processes that, in a world of global competition, can provide us with good jobs. Yet the ways in which we permit top corporate executives to be paid actually give them a strong disincentive to invest in innovation and training. The proper function of the executive is to figure out how to develop and use the corporation’s productive capabilities (business schools call it “competitive strategy”). But that's not happening.
In effect, U.S. top executives rake in obscene sums by not doing their jobs.
The Runaway Compensation Train
When all the data from corporate proxy statements are in within the next month or so, they will show that 2011 was another banner year for top executive pay. Over the previous three years the average annual compensation of the top 500 executives named on corporate proxy statements was “only” $17.8 million, compared with an annual average of $27.3 million for 2005 through 2007. Yet even in these recent “down” years, the compensation of these named top executives was, in real terms, more than double their counterparts’ pay in the years 1992 through 1994.
It might surprise you to learn that in the early 1990s, executive pay was already widely viewed as out of line with what average workers got paid. In 1991 Graef Crystal, a prominent executive pay consultant, published a best-selling book, In Search of Excess: The Overcompensation of American Executives, in which he calculated that over the course of the 1970s and '80s, the real after-tax earnings of the average manufacturing worker had declined by about 13 percent. During the same period, that of the average CEO of a major US corporation had quadrupled! Bill Clinton took up the issue in his 1992 presidential campaign, and immediately upon taking office had Congress pass a law that forbade companies from recording as tax-deductible expenses executive salaries plus bonuses in excess of $1 million.
Unfortunately Clinton chose the wrong pay target. In 1992 salaries and bonuses represented only 23 percent of the total compensation of the top 500 executives named on proxy statements. The largest single component of executive compensation was gains from exercising stock options, representing 59 percent of the total. The Clinton administration left this so-called “performance pay” unregulated.
Perversely, one reaction of corporate boards to the Clinton legislation was to take $1 million in salary plus bonus as the “government-approved minimum wage” for top executives, and therefore to raise these components of executive pay if they fell short of that minimum. The number of named executives with salaries plus bonuses that totaled $1 million or more increased from 529 in 1992 to 703 in 1993 and 922 in 1994.
The other reaction of corporate boards was to lavish more stock options on their top executives. When the stock market boomed in the late 1990s, these executives cashed in. The average annual compensation of the top 500 named executives reached $21 million in 1999 with gains from exercising stock options representing 71 percent of the total, and $32 million in 2000 with option gains now 80 percent of the total.
From 1982 to 2000 the U.S. experienced the longest stock market boom in its history. Average annual stock-price yields of S&P 500 companies were 13 percent in the 1980s and 16 percent in the 1990s. So it didn't require any great genius to make money from stock options. In fact, it became a no-brainer. In 1991, the Securities and Exchange Commission waived the longstanding rule that, as corporate insiders, top executives had to hold stock acquired through exercising their options for six months to prevent “short-swing” profit-taking. As before, executives did not have to put any of their own money at risk in being granted stock options. But now they could also pick the opportune moment to exercise their options without any risk that the value of the company’s stock would subsequently decline before they could sell the stock and lock in the gains.
The New Normal of Corporate Greed
The speculation-fueled “irrational exuberance” of the late 1990s brought unprecedented pay bonanzas to top executives, thus establishing a “new normal” for corporate greed. When boom turned to bust in the early 2000s, money-hungry executives had to look for another way to get stock prices up and make their millions. Their favorite “weapon of value extraction” over the past decade has been the stock buyback (aka stock repurchase). Top executives allocate massive sums of corporate cash to repurchasing their company’s own stock with the purpose of boosting their company’s stock price. Stock buybacks and stock options have become the yin and yang of executive compensation.
Let’s take a look at how it works: The board of directors of Acme Corporation authorizes the CEO to repurchase the company’s own outstanding shares up to a specified value (say $5 billion) over a specified period of time (say three years). On any dates within this three-year period, the CEO then has the authority to instruct the company’s broker to use the company’s cash to buy back shares on the open market up to the $5 billion limit and subject to the SEC rule that the buybacks on any one day can be no more than 25 percent of the company’s average daily trading volume over the previous four weeks. That might permit Acme to do buybacks worth, say, $100 million per day. It may be the end of the quarter, and the CEO and CFO want to meet Wall Street’s expectations for earnings per share. Or they may want to offset a fall in the company’s stock price because of bad news. Or they may want to ensure that the increase in the company’s stock price keeps up with those of competitors, who may also be doing buybacks. Whatever the reason, by the laws of supply and demand, when the corporation spends cash on buybacks, it “manufactures” an increase in its stock price.
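The $100 million-per-day figure in the Acme example follows directly from the 25-percent-of-volume rule. A minimal sketch of the arithmetic, where Acme itself, the $25 share price, and the 16-million-share daily volume are all invented for illustration:

```python
# Hypothetical illustration of the SEC Rule 10b-18 volume condition:
# daily buybacks capped at 25% of average daily trading volume (ADTV)
# over the previous four weeks. All figures below are invented.

def daily_buyback_cap(daily_volumes_shares, share_price):
    """Dollar value of shares a company may repurchase in one day:
    25% of average daily trading volume, valued at the share price."""
    adtv = sum(daily_volumes_shares) / len(daily_volumes_shares)
    return 0.25 * adtv * share_price

# Suppose Acme traded 16 million shares/day on average at $25/share.
volumes = [16_000_000] * 20          # 20 trading days ~ four weeks
cap = daily_buyback_cap(volumes, 25.0)
print(f"Daily buyback cap: ${cap:,.0f}")   # Daily buyback cap: $100,000,000
```

At that pace, the full $5 billion authorization could be exhausted in about 50 trading days, which is why a three-year window leaves the CEO enormous discretion over timing.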
Then, with the stock price up, the CEO, CFO and other insiders may choose to cash in their stock options. Presto! They make tons of money for themselves.
Meanwhile, these executives will tend to ignore investments in innovation and training. Some companies actually fund their buybacks by laying off workers, offshoring jobs to low-wage countries, and taking on debt. The top executives’ weapon of value extraction becomes a weapon of value destruction. They are rewarded handsomely by not doing their jobs.
In 1981, 292 major corporations spent less than 3 percent of their combined net income on buybacks. In 1982, however, the SEC adopted a rule (10b-18) that gave corporations doing very large-scale stock repurchases a “safe harbor” from charges of stock-price manipulation. Buyback activity then became larger and more widespread, increasing substantially over the course of the 1990s. From 2003 to 2007, buybacks really took off, and by 2007 the very same 292 corporations were spending over 82 percent of their net income repurchasing their own stock.
The financial crisis and the Great Recession forced a slowdown in buybacks. S&P 500 companies repurchased a record $609 billion in 2007 but pared it down to $360 billion in 2008 and $146 billion in 2009. They stepped it back up to about $289 billion in 2010 and an estimated $440 billion in 2011. It is quite possible that buybacks in 2012 will be even higher than in the previous record year of 2007. And look for executive pay to increase as well.
Concentration of Income at the Top
Make no mistake about it. Executive pay is a prime reason why in 2005-2008 the top 0.1 percent captured a record 11.4 percent of all household income (including capital gains) in the U.S., compared with 2.6 percent three decades earlier. In 2010 (the latest Internal Revenue Service data available), this number was 9.5 percent. The income threshold among taxpayers for being included in the 0.1 percent in 2010 was $1,492,175. Of the executives named in proxy statements in 2010, 4,743 had total compensation greater than this threshold amount, with a mean income of $5,034,000 and gains from exercising stock options representing 26 percent of their combined compensation.
Total corporate compensation of the named executives does not include other non-compensation income (from securities, property, fees for sitting on corporate boards, etc.) that would be included in their IRS tax returns. If we assume that named executives whose corporate compensation was below the $1.5 million threshold were able to augment that income by 25 percent from other sources, then the number of named executives in the top 0.1 percent in 2010 would have been 5,555.
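The jump from 4,743 to 5,555 named executives rests on that 25 percent augmentation assumption, which is equivalent to lowering the effective corporate-pay floor for top-0.1-percent membership. A quick sketch of the arithmetic (the threshold is the 2010 figure cited above; everything else is illustration):

```python
# If outside income can top up corporate pay by 25%, an executive clears
# the top-0.1% income threshold whenever corporate pay alone exceeds
# threshold / 1.25. The threshold is the 2010 value cited in the text.

threshold = 1_492_175                 # top 0.1% income threshold, 2010
effective_floor = threshold / 1.25    # corporate pay needed under the assumption
print(f"Effective corporate-pay floor: ${effective_floor:,.0f}")
# Effective corporate-pay floor: $1,193,740
```

So every named executive with corporate compensation between roughly $1.19 million and $1.49 million gets counted into the top 0.1 percent under this assumption, which accounts for the additional 812 executives.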
Included in the top 0.1 percent of the US income distribution were a large, but unknown, number of US corporate executives whose pay was above the $1.5 million threshold but who were not named in proxy statements because they were neither the CEO nor the four other highest paid in their particular companies. To take just one example, of the five named IBM executives in 2010, the lowest paid had total compensation of $6,637,910. There were presumably large numbers of other IBM executives whose total compensation was between this amount and the $1.5 million top 0.1 percent threshold.
Let’s Put CEOs to Work for Us
Under the Obama administration, virtually nothing has been done to constrain top executive pay. President Obama signaled his unwillingness to take on the issue when, in an interview in February 2010, he was asked about the many millions paid in 2009 to Jamie Dimon, CEO of JPMorgan Chase, and Lloyd Blankfein, CEO of Goldman Sachs, in the wake of the financial meltdown and bank bailouts. “I know both those guys; they are very savvy businessmen,” the president said. “I, like most of the American people, don’t begrudge people success or wealth. That is part of the free-market system.”
The “Say-on-Pay” provision in the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act sounds good, but it just reinforces a system of incentives that does not work. This provision gives public shareholders the right to express their non-binding opinion to corporate management on issues related to executive compensation. If Congress had understood what drives executive pay in the U.S., however, it would have recognized that the granting of Say-on-Pay rights to public shareholders is part of the problem, not the solution. Through a combination of stock options and stock buybacks, Say-on-Pay provisions reinforce an alignment between the incentives of top executives and the interests of public shareholders that has been undermining investment in America’s future.
It is about time that we took control of exploding executive pay. It is not just that the sums involved are unfair, and as history has shown, will only become more obscene. These executives control the allocation of resources that represent the well-being of the 99 percent, and the ways in which they bank their booty is doing severe damage to the U.S. economy. The investment strategies of business corporations are too important to be left under the control of those who gain when the 99 percent lose.
In 2010, the top 500 U.S. corporations – the Fortune 500 – generated $10.7 trillion in sales, reaped a whopping $702 billion in profits, and employed 24.9 million people around the globe. Historically, when these corporations have invested in the productive capabilities of their American employees, we’ve had lots of well-paid and stable jobs.
That was the case a half century ago.
Unfortunately, it’s not the case today. For the past three decades, top executives have been rewarding themselves with mega-million-dollar compensation packages while American workers have suffered an unrelenting disappearance of middle-class jobs. Since the 1990s, this hollowing out of the middle class has even affected people with lots of education and work experience. As the Occupy Wall Street movement has recognized, the concentration of income and wealth in the top “1 percent” leaves the rest of us high and dry.
What went wrong? A fundamental transformation in the investment strategies of major U.S. corporations is a big part of the story.
A Look Back
A generation or two ago, corporate leaders considered the interests of their companies to be aligned with those of the broader society. In 1953, at his congressional confirmation hearing to be Secretary of Defense, General Motors CEO Charles E. Wilson was asked whether he would be able to make a decision that conflicted with the interests of his company. His famous reply: “For years I thought what was good for the country was good for General Motors and vice versa.”
Wilson had good reason to think so. Under the Federal-Aid Highway Act of 1956, the U.S. government committed to pay for 90 percent of the cost of building 41,000 miles of interstate highways. The Eisenhower administration argued that we needed them in case of a military attack (the same justification that would be used in the 1960s for government funding of what would become the Internet). Of course, the interstate highway system also gave businesses and households a fundamental physical infrastructure for civilian purposes – from zipping products around the country to family road trips in the station wagon.
And it was also good for GM. Sales shot up and employment soared. GM's managers, engineers and other male white-collar employees could look forward to careers with one company, along with defined-benefit pensions and health benefits in retirement. GM’s blue-collar employees, represented by the United Auto Workers (UAW), did well, too. In business downturns, such as those of 1958, 1961 and 1970, GM laid off its most junior blue-collar workers, but the UAW paid them supplemental unemployment benefits on top of their unemployment insurance. When business picked up, GM rehired these workers on a seniority basis.
Such opportunities and employment security were typical of most Fortune 500 firms in the 1950s, '60s and '70s. A career with one company was the norm, while mass layoffs simply for the sake of boosting profits were viewed as bad not only for the country, but for the company, too.
What a difference three decades makes! Now mass layoffs to boost profits are the norm, while the expectation of a career with one company is long gone. This transformation happened because the U.S. business corporation has become, in a (rather ugly) word, “financialized.” It means that executives began to base all their decisions on increasing corporate earnings for the sake of jacking up corporate stock prices. Other concerns -- economic, social and political -- took a backseat. From the 1980s, the talk in boardrooms and business schools changed: instead of running corporations to create wealth for all, leaders were now told to think only of “maximizing shareholder value.”
When the shareholder-value mantra becomes the main focus, executives concentrate on avoiding taxes for the sake of higher profits, and they don’t think twice about permanently axing workers. They increase distributions of corporate cash to shareholders in the forms of dividends and, even more prominently, stock buybacks. When a corporation becomes financialized, the top executives no longer concern themselves with investing in the productive capabilities of employees, the foundation for rising living standards for all. They become focused instead on generating financial profits that can justify higher stock prices – in large part because, through their stock-based compensation, high stock prices translate into megabucks for these corporate executives themselves. The ideology becomes: Corporations for the 0.1 percent -- and the 99 percent be damned.
The 99 percent needs to understand these fundamental changes in the ways in which top executives have decided to make use of resources if we want U.S. corporations to work for us rather than just for them.
The Financialization Monster
The beginnings of financialization date back to the 1960s when conglomerate titans built empires by gobbling up scores and even hundreds of companies. Business schools justified this concentration of corporate power by teaching that a good manager could manage any type of business -- the bigger the better. But conglomeration often became simply a method of using accounting tricks to boost earnings in the short-run to encourage speculation in the company’s stock price. This focus on short-term financial manipulation often undermined the financial conditions for sustaining higher levels of earnings over the long term. But the interest of stock-market speculators was (as it always is) to capitalize on short-term changes in the market’s evaluation of corporate shares.
When these giant empires imploded in the 1970s and 1980s, people began to see the weakness of the model. By the early 1970s the downgraded debt of conglomerates, known as “fallen angels,” created the opportunity for a young bond trader, Michael Milken, to create a liquid market in high-yield “junk bonds.” By the mid-'80s, Milken (who eventually went to jail for securities fraud) was using his network of financial institutions to back corporate raiders in junk-bond financed leveraged buyouts with the purpose of extracting as much money as possible from a company once it was taken over through layoffs of workers and by breaking up the company to sell it off in pieces.
Wall Street changed the way it made its money. Investment banks turned their focus from supporting long-term corporate investment in productive assets to trading corporate securities in search of higher yields. The great casino was taking form. In 1971, NASDAQ was launched as a national electronic market for generating price quotes on highly speculative stocks. The Employee Retirement Income Security Act of 1974 encouraged corporate pension funds to get into the game since inflation had eroded household savings. In 1975, competition from NASDAQ led the much more conservative New York Stock Exchange, which dated back to 1792, to end fixed commissions on stock transactions. This move only further encouraged stock market speculation by making it less costly for speculators to buy and sell.
In 1980, Robert Hayes and William Abernathy, professors of technology management at Harvard Business School, wrote a widely read article that criticized executives for focusing on short-term profits rather than investments in innovation. But in 1983, two financial economists, Eugene Fama of the University of Chicago and Michael Jensen of the University of Rochester, co-authored two articles in the Journal of Law and Economics which extolled corporate honchos who focused on “maximizing shareholder value” -- by which they meant using corporate resources to boost stock prices, however short the time-frame. In 1985 Jensen landed a higher profile pulpit at Harvard Business School. Soon, shareholder-value ideology became the mantra of thousands of MBA students who were unleashed in the corporate world.
Proponents of the Fama/Jensen view argue that for superior economic performance, corporate resources should be allocated to maximize returns to shareholders because they are the only economic actors who make investments without a guaranteed return. They say that shareholders are the only ones who bear risk in the corporate economy, and so they should also get the rewards. But this argument could not be more false. In fact, lots of people bear the risks of investing in the corporation with no assurance that those investments will pay off for them. Governments in the U.S., funded by the body of taxpayers, are constantly making investments in physical infrastructure and human capabilities that provide benefits to businesses, but without a guaranteed return to taxpayers. An employer expects workers to give time and effort beyond that required by their current pay to make a better product and boost profits for the company in the future. Where’s the worker’s guaranteed return? In contrast, most public shareholders simply buy and sell shares of a corporation on the stock market, making no contribution whatsoever to investment in the company’s productive capabilities.
In the name of this misguided philosophy, major U.S. corporations now channel virtually all of their profits to shareholders, not only in the form of dividends, which reward them for holding shares, but even more importantly in the form of stock buybacks, which reward them for selling shares. The sole purpose of stock buybacks is to give a manipulative boost to a company’s stock price. The top executives then benefit when they exercise their typically bountiful stock options and cash in by selling the stock. For 2001-2010, 459 companies in the S&P 500 Index in January 2011 distributed $1.9 trillion in dividends, equivalent to 40 percent of their combined net income, and $2.6 trillion in buybacks, equal to another 54 percent of their net income. After all that, what was left over for investments in innovation, including upgrading the capabilities of their workforces? Not much.
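Taken together, those percentages imply that the 459 companies earned roughly $4.8 trillion in combined net income over the decade, and paid out about 94 percent of it. A quick back-of-the-envelope check (the net-income total is inferred from the article's percentages, not stated in it):

```python
# Payout arithmetic for the 459 S&P 500 companies, 2001-2010, as reported
# in the article. The combined net income (~$4.8 trillion) is an inferred
# figure: 1.9 / 0.40 and 2.6 / 0.54 both point to roughly 4.8.
dividends = 1.9   # trillions of dollars
buybacks = 2.6    # trillions of dollars
net_income = 4.8  # trillions of dollars (inferred)

dividend_ratio = dividends / net_income            # ~40 percent
buyback_ratio = buybacks / net_income              # ~54 percent
total_payout = (dividends + buybacks) / net_income # ~94 percent combined
```

On these figures, only about 6 percent of net income was left for everything else, which is the article's point about what remains for innovation and workforce investment.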
Falling to the Challenge
Big changes in markets and technologies since the 1980s have given U.S. corporations serious competitive challenges. Confronted by Japanese and then Korean competition, companies closed plants, permanently displacing blue-collar workers from what had been middle-class jobs. Meanwhile, the open systems technologies that characterized the microelectronics revolution favored younger workers with the latest computer skills. In the name of shareholder value, by the 1990s U.S. corporations seized on these changes in competition and technology to put an end to the norm of a career with one company, ridding themselves of more expensive older employees in the process. In the 2000s, American corporations found that low-wage nations like China and India possessed millions of qualified college graduates who were able and willing to do high-end work in place of U.S. workers. Offshoring put the nail in the coffin of employment security in corporate America.
In response to these challenges, U.S. corporations could have used their profits to upgrade the capabilities of the U.S. labor force, laying the foundation for a new prosperity. Instead, their misguided, financialized responses have meant big losses for taxpayers and workers while the top 1 percent has gained. Rather than rising to the challenge, corporations have fallen into a greed and short-sightedness that chip away at our chances for a prosperous economy.
Yet properly governed, corporations can be run for the 99 percent. In fact, that’s still the case in many successful economies. The truth is that it’s possible to take back the corporations for the 99 percent in the U.S. if we can really wrap our heads around the problem and the solutions. Here are three places to start:
1) Ban It. Ban large established companies from buying back their own stock, and reward them instead for investing in the retention and training of their employees.
2) Link It. Link executive pay to the productive performance of the company, with increases in executive pay being tied to increases for the corporate labor force as a whole.
3) Occupy It. Recognize that taxpayers and workers bear a significant proportion of the risk of corporate investment, and put their representatives on corporate boards where they can have input into the relation between risks and rewards.