But as market saturation and consumers' tech sophistication increase, successful companies recognize that dominance almost never lasts more than a generation or two (it has been roughly 20 years since the dotcom era began...). They prepare to protect their advantage as circumstances change. JL
Edward Tenner reports in Aeon:
The stark fact of technological transformation from the late 18th century to the late 20th was that low-hanging fruit was scarce. Innovation, much less transformation, was arduous and slow. Software ventures are far easier to scale up, and what has held back productivity growth is not the exhaustion of technological resources but these firms' success at attracting capital.
Last spring, I finally visited one of the United States’ industrial shrines: the Thomas Edison National Historical Park in West Orange, New Jersey. Before the rise of Silicon Valley, Thomas Edison was the country’s greatest technological celebrity and, even now, no Silicon Valley billionaire approaches Edison’s portfolio of 1,093 US patents. (For example, Amazon’s founder Jeffrey Bezos is on just 81, Facebook’s Mark Zuckerberg is on 31, and Google’s co-founder Sergey Brin is on 20.)
After touring Edison’s laboratory, shops and a replica of his original Black Maria film studio, I reflected on what Edison might have thought of the iPhone SE and Waze software that had conquered my aversion to the chronic congestion and confused highway grid of northern New Jersey, testaments to the success of Edison’s friend Henry Ford. He would have admired the miniaturisation of today’s smartphones and the software that can give turn-by-turn instructions with no additional charge to the user. Waze is far from perfect; on my return trip, it originally pointed me in the wrong direction at the entrance to the Garden State Parkway. But Edison insisted on how much work was needed to overcome difficulties, so the glitch probably would not have dimmed his admiration for Waze’s parent company, Google (now formally Alphabet).
The main Edison building was a three-storey encyclopaedia of all the tools available in his time for marshalling a staff of master tinkerers, including some with the university credentials that Edison himself never acquired. The machinery advanced in stages, from heavy equipment on the ground floor to the most delicate work on the third. It was a machine for designing, prototyping and developing other machines, and Edison’s magnificent office-library-trophy room included a cot for his schedule of frequent naps.
Edison might be the last great self-taught inventor who also brought technological research and design into the 20th century. By building a staff of researchers, some trained in the emerging US academic science programmes, Edison’s laboratory served as a model for the 20th century’s great industrial laboratories, including General Electric, Bell Labs and the chemical and pharmaceutical giants. Edison’s role in power generation, musical recording, motion pictures and other technology remained an inspiration for decades. Yet some influential economists insist that the age of rapid development of transformative inventions, pioneered by Edison, has reached an end.
The first prominent warning was sounded by the US economist Tyler Cowen in The Great Stagnation (2011). With this short book, Cowen proposed that the economic woes of the US reflected the exhaustion of centuries of comparatively easy innovation, which he compared to the ‘low-hanging fruit’ of a cherry orchard. Another US economist, Robert J Gordon, brought a historical perspective to Cowen’s argument that the low-hanging fruit of innovation had already been picked. In his book The Rise and Fall of American Growth (2016), Gordon described a golden age of rising living standards in the century from 1870 (the year after Edison’s first patent) to 1970. He questioned the impact of the web’s lower transaction costs on the quality of life.
Cowen and Gordon both acknowledged the power and market capitalisation of companies using cloud services to bring buyers, sellers and advertisers together – from Amazon and Google to ‘sharing economy’ newcomers such as Uber and Airbnb. Neither of them, however, considered the possibility that it is not the exhaustion of limited technological options but these firms’ success in attracting capital that has held back productivity growth. Two other academics, Clayton M Christensen and Derek Van Bever of Harvard Business School, have made a distinction between ‘process innovations’ that speed manufacture of existing products and reduce transaction costs, and ‘market-creating innovations’ that give rise to new industries and employment. Surviving buildings of Edison’s vast factory complex in West Orange, now undergoing redevelopment as loft apartments, still testify to the jobs that Edison’s laboratories created.
Buildings such as Salesforce Tower in downtown San Francisco and the new Apple headquarters in Cupertino offer six-figure salaries for programming talent. They do not offer the kind of well-paid blue-collar jobs of the half-century from 1920 to 1970 that Gordon studied. There is no proof, but it’s worth considering whether the skyrocketing market capitalisations of Silicon Valley’s so-called ‘unicorns’ – corporations that are worth a billion dollars or more but have not yet gone public – are growing at the expense of undercapitalised market-creating innovations. Consider two facts: first, every year magazines such as Scientific American, MIT Technology Review and New Scientist publish lists of ‘breakthrough’ technologies; second, as the historian of military technology David Edgerton documented in The Shock of the Old (2006), the actual rate of technological change is surprisingly slow.
The stark fact of technological transformation from the late-18th century to the late-20th is that there was scant low-hanging fruit. Innovation, much less transformation, was arduous and slow. In a book that launched a genre, Self-Help (1859), the Scottish physician-journalist Samuel Smiles presented the leitmotif of success as almost superhuman perseverance under difficulties (quoting John Ruskin on patience as ‘the finest and worthiest part of fortitude’). In his bestseller, Smiles inspired readers with tale after tale of craft heroes such as the 16th-century French Protestant potter Bernard Palissy, who burned his household furniture to perfect his glazing technique.
Like many later motivational works, Self-Help could not avoid survivorship bias; there must have been many equally resilient entrepreneurs with brilliant ideas who simply ran out of chairs – or money – to burn. But it is important to remember that the triumphs of physics, chemistry and engineering that Gordon and other economic and business historians highlight resulted from years of struggle. Most took decades to become consumer staples. They were slow to scale. Often, investors were patient, if only because, even after the rise of US science, there was no investment alternative.
Begin with Edison himself. His real genius was not so much the first commercially practical lightbulb filament as an entire system for the generation, transmission and sale of electrical power. Ultimately, the alternating-current technology developed by George Westinghouse proved more profitable and easier to expand than the direct current that Edison insisted was safer, and Edison’s financial backers merged his company into its rival. Mass electrification needed other inventors as well. While most households in Northeastern industrial states and California had electric power by 1921, every lightbulb still took a skilled glassblower 30 seconds to produce. The Corning ribbon machine, which could produce 300 lightbulbs per minute, was not developed until 1926. The electrification of most of the South and West, and the rural Midwest, did not start until the 1930s. Mass electrification outside the big cities took half a century.
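A quick back-of-the-envelope check, using only the figures above, makes the scale of that single process innovation explicit: a glassblower producing one bulb every 30 seconds manages 2 bulbs per minute, so the ribbon machine represented a

$$\frac{300\ \text{bulbs/min}}{2\ \text{bulbs/min}} = 150\times$$

jump in throughput – and even that machine arrived nearly half a century after Edison’s first practical lamp.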
Typically, entrepreneurship depended less on the ideas of a single experimenter than on the formation of patent consortia pooling multiple inventors’ ideas. It was the failed actor Isaac Merritt Singer with his legal ace Edward Clark, for example, who launched the sewing machine industry. The former mill mechanic Gordon McKay unleashed the giant United Shoe Machinery Corporation – a company as controversially dynamic in the 1890s as Microsoft would become a century later – by combining shoe-manufacturing patents. (McKay’s immense bequest to Harvard probably funded some of the computer science professors who taught Bill Gates and Zuckerberg.) It took decades for the ideas of dozens of inventors to be melded into industries, and more than another 10 years for the ideas of Nikolaus Otto, Gottlieb Daimler, Carl Benz and Wilhelm Maybach to coalesce in the original Benz automobile of 1886. The first high-quality, popularly priced assembly-line automobile, Henry Ford’s Model T, did not appear until 1908.
Mass print media, too, demanded perseverance. At the beginning of the 19th century, paper was still made sheet by sheet. The first patent for the continuous production of paper with a system of moving screens and rollers was granted to Louis Robert in 1799. The English paper manufacturers Henry and Sealy Fourdrinier made technical improvements to this process, but not enough to make it practical, and they were forced to declare bankruptcy after spending £60,000. It was a third party, the engineer Bryan Donkin, who finally made usable continuous papermaking machines based on the first patent.
Even after the technology existed to mass-produce paper, its price depended on the availability of rags and other cotton and linen materials, stretched with a growing list of adulterants. The rise of high-speed presses, daily newspapers and mass elementary education brought more production and lower prices, and a market for a machine that could grind wood into fibre suitable for the Fourdrinier process. The idea for such a machine came as early as 1719, from the French scientist René-Antoine Ferchault de Réaumur. It was patented by the German inventor Friedrich Gottlob Keller in 1845. Yet US newspapers did not begin to use the technology until after the Civil War. In other words, it had taken about 150 years to scale up from concept to mass production. Keller, unable to pay the fee for renewing his patent, became destitute. Despite their perseverance, neither he nor the Fourdriniers made it into Samuel Smiles’s Self-Help.
Orville and Wilbur Wright better approximate the Smiles model of technological heroes. Using the insights of their bicycle business to master the control of heavier-than-air craft, the Wright brothers flew at Kitty Hawk in 1903, only five years after beginning their experiments. In 1909, the Wrights’ main French rival, Louis Blériot, crossed the English Channel in his monoplane. Yet for all the aircraft exploits of the First World War, the Roaring Twenties and the early Depression, commercial aviation was still not a mainstream alternative to the Pullman sleeper car for long trips. In 1934, a cross-country air journey could require 15 stops, at least one change, and take 25 hours. Scheduled flights were subsidised by Post Office airmail contracts.
The Douglas DC-3, which came more than 30 years after Kitty Hawk, made flying not only enjoyable for passengers but profitable without airline subsidies. Its design pointed the way to the mass air transportation of the jet era, but that began only in the 1960s, another 30 years later. Scaling up took time – and subsidies to aircraft producers continued, in the form of military contracts in the US and civilian support for Airbus in Europe.
State support also proved crucial for one of the most important technological revolutions in modern history, the scaling up of antibiotics in the late 1930s and early ’40s. When the Scottish bacteriologist Alexander Fleming discovered in 1928 that a previously obscure mould was inhibiting growth of the Staphylococcus bacteria he had been cultivating in Petri dishes, the difficulty of purifying Penicillium notatum meant that his breakthrough was of almost no therapeutic value. A decade later, the researchers Howard Florey, Ernst Chain and their colleagues at the University of Oxford were still struggling to develop methods to cultivate and purify enough of the mould to permit clinical testing. Their work was not enough to save their first patient, a policeman who had been infected while pruning roses, and who died after what had appeared to be a miraculous recovery.
Despite the immense potential profit and human benefits inherent in scaling up penicillin production, the feat eluded initial efforts. The surface fermentation method at the Oxford laboratory proved disappointing, and the British chemical industry shifted to war work. So Florey and his colleagues turned to the US. Yet it was not the country’s medical schools or academic microbiology departments that took the next step but a laboratory of the US Department of Agriculture in Peoria, Illinois, where a mouldy cantaloupe from a local fruit market turned out to provide the most promising penicillin-producing strain yet found.
The urgency of war was the crucial catalyst in realising the drug’s potential. A newly created US agency, the Office of Scientific Research and Development, oversaw the accelerating development of military-related innovation, and its Committee on Medical Research sprang into action, convincing the sceptical pharmaceutical companies of the US to work together and (perhaps most importantly) exchange information. In March 1944, Pfizer opened the first facility for commercial production of penicillin, in a Brooklyn ice plant – 16 years after Fleming’s discovery. Yields at Pfizer and other companies grew exponentially, from 21 billion units in 1943 to 1.66 trillion in 1944 to fully 6.8 trillion in 1945. By 1946, limits on civilian use had been dropped, and penicillin was readily available on prescription.
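A rough check of those yield figures bears out ‘exponentially’: over the two years from 1943 to 1945, output multiplied by

$$\frac{6.8\times10^{12}}{21\times10^{9}} \approx 324,$$

a geometric mean of roughly 18-fold growth per year.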
For a major invention without an obvious military or public-health use, scaling up could take even longer. Chester Carlson, a Caltech graduate and Bell Labs engineer, filed his patent for a dry photocopying process in 1938. Frustrated by the need to copy long passages by hand as a part-time law student working in the New York Public Library, he saw an opportunity for making sharp, high-contrast images on ordinary paper instead of the blurry and costly copies turned out by conventional photostat machines.
Carlson’s principle was sound, but the chemical and physical challenges of making an economically viable office machine proved a nightmare. Like the scaling up of penicillin, the development of the Xerox 914 was an interdisciplinary team effort. Haloid, the company that played the key role, was a small old-line Rochester firm in the shadow of Kodak, not a dominant company like Merck or Pfizer. Yet it overcame many setbacks to make a successful prototype; the high temperature needed to fuse toner to paper was such a combustion hazard that some of the earliest models had built-in fire extinguishers. In the late 1950s, Arthur D Little, one of America’s most prestigious management consulting firms, made the now-notorious recommendation to its client IBM not to acquire Haloid and its photocopying technology. The market for the machine, it advised, would be too modest.
What these and other triumphs of 19th- and 20th-century technological progress share is not a sudden dividend from a scientific insight. Rather they all took painstaking, risky, indirect routes to fruition. Most illustrate what the German-born economist Albert O Hirschman called the ‘hiding hand’, the paradox that if many innovators could foresee what they would have to endure, they would not start, but once committed, they find ways to realise their goals.
Most Silicon Valley start-ups also face daunting odds and fail. But there is a crucial difference between 21st-century Silicon Valley innovation and the physical and chemical breakthroughs of earlier centuries. Software-based ventures are far, far easier to scale up than hardware-based ones. What remains hard is finding people who can design and maintain web-based systems capable of growing rapidly without breaking down; if such people were plentiful, Silicon Valley salaries would be much lower and the housing crisis in San Francisco less severe. Those who can design ultra-efficient algorithms multiply ever-growing processing power and can achieve a dominance that appears, at least, impossible to challenge. Think of Google’s 22-fold growth in share price from its initial public offering in 2004 to 2017. Rebounding from privacy scandals, Facebook is now worth more than four times the price of its own initial public offering just six years ago, in 2012. As of April 2018, its net income was more than $5 billion.
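For scale, a 22-fold rise over the 13 years from the 2004 IPO to 2017 works out to a compound annual growth rate of about

$$22^{1/13} \approx 1.27,$$

that is, roughly 27 per cent a year, sustained for well over a decade.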
Google and Facebook, like Apple and Amazon, are at least profitable companies. Even more striking are the market capitalisations of Silicon Valley companies that are still losing money. The transportation app Uber has burned through $10.7 billion in the past nine years: a fact that has not stopped it raising more than $17 billion from investors. Far from creating jobs or speeding up traffic, Uber has been increasing congestion in major cities and growing at the expense of ‘medallion’ or licensed taxi drivers, whose suicides are now a sad staple of city newspapers.
From the perspective of technology and social progress, the problem with Silicon Valley is the opportunity costs of its start-up mentality. I have seen no proof that it has diverted investment from the hard innovations that Silicon Valley also needs; but at the same time, I have rarely been able to spend a full day in New York without needing to recharge my iPhone battery. Its lithium-ion technology is now more than 25 years old. Quantum computing, too, has seemed just around the corner for years, and perhaps more resources for research would hasten its benefits. In the 20th century, wartime emergency often accelerated the solution of the hardest engineering problems: artificial fertilisers, antibiotics, radar, cryptography, atomic energy, jet engines. But for all the alarm over climate change and cybersecurity, there is no comparable broad-based sense of urgency today.
Software and cloud-based platform companies such as Amazon, Google and Facebook can take advantage of the capacity of software to increase the scope of operations rapidly. But an increased scope of operations is itself a modest innovation. Reid Hoffman, a co-founder of LinkedIn and an early PayPal executive, has even coined the word ‘blitzscaling’ for such exponential growth. ‘[T]he marginal costs of serving any size market are virtually zero,’ he told the Harvard Business Review in 2016. ‘The more that software becomes integral to all industries, the faster things will move.’
With the fellow tech entrepreneur Chris Yeh, Hoffman has written a book on blitzscaling, subtitled ‘The Lightning-Fast Path to Building Massively Valuable Companies’, due to be published this October. I have not had the chance to read it, but the marketing suggests that the book’s role models will be well-known: Amazon, Uber, Airbnb. Yet it’s questionable whether any of the Silicon Valley champions have created the kind of broad-based prosperity of former industrial centres. And there are social costs beyond those of ride-sharing: recycling can’t absorb the extra cardboard generated by e-commerce, and apartment-sharing apps have increased urban rents. Hoffman has taught a course on blitzscaling at Stanford University, but do 21st-century undergraduates really need more encouragement to get rich quickly?
What can we do? Thinking back to the urgent pressure of the Second World War, and how government agencies helped private industry to overcome scepticism and foster cooperation, our first goal should be to study more systematically all those ideas that the technology press has identified as transformative, and to offer better incentives (for example, prizes and preferential tax treatment) for socially beneficial hard technology. What good are self-driving cars if our roads – for want of more durable materials and paving techniques – are filled with potholes that they still can’t detect?
The most promising candidate for a market-creating, difficult breakthrough is the liquid fluoride thorium reactor (LFTR), a form of nuclear energy that avoids the twin perils of weapons proliferation and of meltdowns that have held back conventional nuclear power. The US government largely abandoned research on the design four decades ago, but a Dutch company recently activated an LFTR project, and the Chinese government is said to be optimistic about the technology. Could this be an opportunity like the mass-production of penicillin more than 75 years ago: an exciting principle facing major obstacles that might be conquered by the kind of international government and private cooperation that transformed medicine?
And speaking of antibiotics, their continued overuse – especially in the developing world, though wealthy countries have also failed to cut back as public-health experts have long advised – endangers the effectiveness of penicillin and its alternatives. Superbugs, resistant to multiple antibiotics, now strike terror in hospitals. Vaccine development is also lagging: there is no vaccine effective against multiple strains of influenza, and still none for the AIDS virus. There are promising ideas for turbocharging existing antibiotics with additional mechanisms that thwart the development of resistance. Is the lure of quick Silicon Valley profits diverting resources that could accelerate bringing them into the medical mainstream? Artificial intelligence could be indispensable in this effort, for example by cutting drug-discovery time through robotics. Discovery is likely to remain hard, but fortunately we will also have more powerful tools. As the economist Joel Mokyr put it in 2013, higher fruit just needs taller ladders.
The best things in life are not virtually free and usually can’t move fast. We should think twice before following our blitzscaling instincts.