Government As Venture Capitalist: The Amazing True History
The federal government invested heavily in microchips in the '50s and '60s, for instance, virtually inventing the field of computer science from whole cloth, carving what we now know as Silicon Valley out of the orchards and farms of the South Bay, and guaranteeing the entirety of the market for microchips through most of the 1960s. But it took several more decades before private firms fully understood the commercial potential of information technologies powered by microchips and began to pour resources into commercial applications.
The same has been the case with myriad other technologies. Today's relatively inexpensive jet travel began with Pentagon procurement and R&D for jet turbines in the 1940s and '50s. But it took many decades before jet travel became accessible to the average American and to much of the rest of the world.
The rapid diffusion of general purpose technologies across international borders also makes cross-national comparisons of the relationship between public investments in technology and economic growth problematic. The Internet, to take one obvious example, was invented in a government laboratory in the late '60s, and its early applications connecting universities and research laboratories were heavily underwritten by the federal government. But within just a few years of its broad commercialization in the late 1980s and early '90s, it had given rise to the World Wide Web and quickly spread to economies all over the world, which reaped the benefits along with the United States.
Meanwhile, the long-standing conventional wisdom that what drives technology innovation is economic competition between firms for profits and market share is increasingly being undermined by new research. It now appears that the more ruthlessly competitive a given market is, the less innovative it tends to be. More firms competing for less market share mean lower profit for each individual firm — and lower profit means fewer investments in technological innovation.
During America's mid-century economic heyday, a lot of important technological innovation was done by large private firms, which were either outright monopolies or overwhelmingly dominant in their primary markets. With steady profits to reinvest for the long term, firms like AT&T, Edison Electric and Xerox made enormous investments in potentially game-changing technologies, and corporate research laboratories like Bell Labs and Xerox PARC are justly revered for their many innovations.
But now that the national monopolies and oligopolies of the industrial era have been broken up, private firms tend to invest much less in the high-risk, high-reward technologies that drive economic growth. The chance of failure is too great, the rewards too uncertain, and the technologies too easily copied by competitors. Firms still spend a lot on research and development in the aggregate, but it is mostly spent on incremental product or process innovations, not long-term research to develop new disruptive technologies with the potential to radically transform existing markets and create entirely new ones.
As a result, public funding for technology innovation is now more important than ever. In his 2010 book, "State of Innovation," UC Davis sociologist Fred Block examined R&D Magazine's list of the top 100 annual innovations over four decades, going back to 1970, and found that the percentage of those technologies to which public investment could be traced had grown dramatically. In 2006, 77 out of 88 domestic winners had been at least partially funded by government. And yet, ironically, public investment in technology innovation has been declining, as a percentage of GDP, for 30 years. This is in no small part because, while we often celebrate failure in the private sector as an essential part of the innovation process, we have increasingly little tolerance for it in the public sector.