
How Innovation Works
The truth is that twenty-one different people can lay claim to having designed or critically improved incandescent light bulbs by the end of the 1870s, mostly independently of each other, and that is not counting those who invented critical technologies that assisted in the manufacture of light bulbs, such as the Sprengel mercury vacuum pump. Swan was the only one whose work was thorough enough and whose patents were good enough to force Edison to go into business with him. The truth is that the story of the light bulb, far from illustrating the importance of the heroic inventor, turns out to tell the opposite story: of innovation as a gradual, incremental, collective yet inescapably inevitable process. The light bulb emerged inexorably from the combined technologies of the day. It was bound to appear when it did, given the progress of other technologies.
Yet Edison, frankly, deserves his reputation, because although he may not have been the first inventor of most of the ingredients of a light bulb, and although the tale of a sudden eureka breakthrough on 22 October 1879 is largely based on retrospective mythmaking, he was none the less the first to bring everything together, to combine it with a system of generating and distributing electricity, and thereby to mount the first workable challenge to the incumbent technologies of the oil lamp and the gas lamp. So much more impressive, all told, than a blinding flash of inspiration, but vanity, vanity: people prefer to be thought brilliant rather than merely hard-working. Edison was also the one who made light bulbs (almost) reliable. Having hubristically claimed to have made a light bulb that would reliably last a long time before failing, he began a frantic search to prove his boast true. This is known today in Silicon Valley as ‘fake it till you make it’. He tested more than 6,000 plant materials in his bid to try to find the ideal material for making a carbon filament. ‘Somewhere in God Almighty’s workshop,’ Edison pleaded, ‘there is a vegetable growth with geometrically powerful fibers suitable to our use.’ On 2 August 1880 Japanese bamboo was the eventual winner, proving capable of lasting more than 1,000 hours.
Thomas Edison understood better than anybody before, and many since, that innovation is itself a product, the manufacturing of which is a team effort requiring trial and error. Starting his career in the telegraph industry and diversifying into stock-ticker machines, he then set up a laboratory in Menlo Park, New Jersey, in 1876, to do what he called ‘the invention business’, later moving to an even bigger outfit in West Orange. He assembled a team of 200 skilled craftsmen and scientists and worked them ruthlessly hard. He waged a long war against his former employee Nikola Tesla’s invention of alternating-current electricity for no better reason than that Tesla had invented it rather than he. Edison’s approach worked: within six years he had registered 400 patents. He remained relentlessly focused on finding out what the world needed and then inventing ways of meeting the needs, rather than the other way around. The method of invention was always trial and error. In developing the nickel-iron battery his employees undertook 50,000 experiments. He stuffed his workshops with every kind of material, tool and book. Invention, he famously said, is 1 per cent inspiration and 99 per cent perspiration. Yet in effect what he was doing was not invention, so much as innovation: turning ideas into practical, reliable and affordable reality.
And yet for all the gradual nature of the innovation of the light bulb, the result was a disruptive and transformational change in the way people lived. Artificial light is one of the greatest gifts of civilization, and it was the light bulb that made it cheap. A minute of work in 1880 on the average wage could earn you four minutes of light from a kerosene lamp; a minute of work in 1950 could earn you more than seven hours of light from an incandescent bulb; in 2000, 120 hours. Artificial light had come within the reach of ordinary people for the first time, banishing the gloom of winter, while expanding the opportunity to read and learn, plus incidentally reducing fire risk. There was no significant down-side to such innovation.
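Put as rough arithmetic, taking those figures at face value and converting hours to minutes (so ‘more than seven hours’ is treated as about 420 minutes), the light bought by a minute of work improved roughly as follows:

```latex
\[
\frac{420}{4} \approx 105\times \quad (1880 \to 1950),
\qquad
\frac{120 \times 60}{4} = 1800\times \quad (1880 \to 2000).
\]
```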
The incandescent bulb reigned supreme for more than a century, being still the dominant form of lighting, at least in domestic settings, well into the first decade of the twenty-first century. When it gave way to a new technology, it did so under duress. That is to say, it had to be banned, because its replacement was so unpopular. The decision by governments all over the world around 2010, under lobbying from the makers of compact fluorescent bulbs, to ‘phase out’ incandescents by fiat in the interest of cutting carbon dioxide emissions, proved to be a foolish one. The compact fluorescent replacements took too long to warm up, did not last as long as advertised and were hazardous to dispose of. They were also much more expensive. Their energy saving did not make up for these drawbacks in most consumers’ eyes, so they had to be forced on to the market. The cost to Britain alone of this coerced purchase and the subsidy that accompanied it has been estimated at about £2.75bn.
Worst of all, had governments waited a few more years, they would have found a far better replacement coming along that was even more frugal in energy and had none of the disadvantages: light-emitting diodes, or LEDs. The reign of the compact fluorescents lasted just six years before they too were rapidly abandoned and manufacturers stopped producing them because of the falling cost and rising quality of LEDs. It is as if the government in 1900 had forced people to buy steam cars instead of waiting for better internal-combustion vehicles. The whole compact fluorescent light bulb episode is an object lesson in misinnovation by government. As the economist Don Boudreaux put it: ‘Any legislation forcing Americans to switch from using one type of bulb to another is inevitably the product of a horrid mix of interest-group politics with reckless symbolism designed to placate an electorate that increasingly believes that the sky is falling.’
LED lights have actually been waiting in the wings for a long time. The phenomenon behind them, that semiconductors sometimes glow when conducting electricity, was first observed in 1907 in Britain and first investigated in 1927 in Russia. In 1962 a General Electric scientist named Nick Holonyak stumbled on how to make bright red LEDs from gallium arsenide phosphide, while trying to develop a new kind of laser. Yellow ones soon followed from a Monsanto lab, and by the 1980s LEDs were in watches, traffic lights and circuit boards. But until Shuji Nakamura, working for Nichia in Japan, developed a blue LED using gallium nitride in 1993, it proved impossible to make white light, which kept LEDs out of mainstream lighting.
Even then it took twenty years to bring the price of this solid-state lighting down to reasonable levels. Now that has happened, however, the implications are remarkable. LED lights use so little power that a house can be well lit while not on the grid, perhaps using solar panels, a valuable opportunity for remote properties in poor countries. They have put bright flashlights inside smartphones. They emit so little heat that they make indoor ‘vertical’ farming of lettuces and herbs possible on a grand scale, especially using tunable LEDs to produce the wavelengths best suited to photosynthesis.
The ubiquitous turbine
If Newcomen was from humble origins, poor and illiterate in his younger days, the same cannot be said of another key name in the story of steam. Charles Parsons was the sixth son of the wealthy Earl of Rosse, an Irish peer. He was born and raised at Birr Castle in County Offaly, Ireland, and given private tuition in place of school before going up to Cambridge University to read mathematics.
But this was no typical aristocratic household. The earl was an astronomer and engineer. He encouraged his sons to spend time in his workshops rather than libraries. Charles and his brother built a steam engine with which to provide the power for grinding the reflector on his father’s telescope. When he left university it was not for a comfortable berth in the law, politics or finance, but for an apprenticeship in an engineering firm on the Tyne. He proved a brilliant engineer and in 1884 he designed and patented the steam turbine that would prove to be, with very few modifications, the indispensable machine that gave the world electricity and that powered the navies and liners of the sea and later the jets of the air. To this day, it is basically Parsons’s design that keeps the lights on, navies afloat and airliners aloft.
A turbine is a device that spins on its axis, driven by a moving fluid. There are two ways to use steam (or water) to make something turn: impulse or reaction. Directing the steam from a fixed nozzle at buckets on a wheel will turn that wheel (impulse); and squirting the steam at an angle out of nozzles on the rim of the wheel itself will also turn it (reaction). A spinning sphere driven by steam shooting out of two angled nozzles had been built as a toy by Hero of Alexandria in the first century AD. Parsons concluded early on that impulse turbines were inefficient and stressful to the metal. He realized too that a series of turbines, each extracting some of the steam’s energy in turn, would capture more of that energy more efficiently. He redesigned dynamos to generate electricity from turbines and within a few years the first electric grids were being built with larger and larger Parsons turbines.
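Parsons’s case for sharing the expansion across many stages can be seen in a back-of-the-envelope calculation. The sketch below is a minimal illustration, assuming an ideal expansion and a purely hypothetical total enthalpy drop of 1,000 kJ/kg; none of the numbers comes from Parsons’s own designs.

```python
import math

def stage_speeds(total_enthalpy_drop_j_per_kg, n_stages):
    """Ideal steam-jet speed when the total enthalpy drop is shared
    equally across n_stages (v = sqrt(2 * dh_per_stage)), plus the
    roughly optimal blade speed of about half that."""
    dh_per_stage = total_enthalpy_drop_j_per_kg / n_stages
    jet_speed = math.sqrt(2 * dh_per_stage)   # metres per second
    blade_speed = jet_speed / 2               # rough optimum for a simple stage
    return jet_speed, blade_speed

# Hypothetical total drop of 1,000 kJ/kg through the whole turbine
for n in (1, 5, 15):
    jet, blade = stage_speeds(1_000_000, n)
    print(f"{n:>2} stage(s): jet ~{jet:4.0f} m/s, blade ~{blade:4.0f} m/s")
```

A single stage would need blades moving at around 700 metres per second, far beyond what Victorian metal could bear; fifteen stages bring that below 200, which is the practical point of using many stages.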
Parsons set up his own company but had to leave behind the intellectual property in his original designs, and he spent five years trying to build radial-flow turbines before he was able to revert to his original axial-flow design. He tried and failed to interest the Admiralty in the devices as a way of powering ships. So in 1897 he sprang a cheeky surprise on the Royal Navy.
Parsons, who was fond of boats and yachting, had made a sleek little ship, Turbinia, powered by steam turbines turning a screw propeller. The first results were disappointing, mainly because of the propeller, which caused ‘cavitation’ in the water – small vacuum pockets behind the screw blades that wasted energy. Parsons and Christopher Leyland went back to the laboratory, trying many designs to find one that might solve the cavitation problem. It was trial and error. They stayed up all night at times and were still at the water tank when the housemaids arrived in the morning. It was frustrating work, but by 1897 Parsons had replaced the single radial-flow turbine with three axial-flow ones, and the single propeller shaft with three shafts, each armed with three screws. He knew by now, from sea trials, that his little craft, with nine propellers, could achieve 34 knots, much faster than any ship of the time. He even gave a public talk about it in April 1897, which the Times newspaper reported, concluding dismissively that turbine technology was ‘in a purely experimental, perhaps almost in an embryo stage’ as far as ships were concerned. How wrong they were.
As the Grand Fleet assembled at Spithead on 26 June in the presence of the Prince of Wales, to mark the Diamond Jubilee of Queen Victoria, Parsons was planning an audacious stunt. Over 140 ships were drawn up in four lines stretching more than twenty-five miles in all. Between them steamed a royal procession of ships: Victoria and Albert, carrying the Prince of Wales, the P&O liner Carthage, with other royal guests aboard, Enchantress, with the Lords of the Admiralty, Danube, with members of the House of Lords, Wildfire, with colonial prime ministers, the Cunard liner Campania, with members of the House of Commons, and finally Eldorado, carrying foreign ambassadors. A line of invited foreign battleships included the König Wilhelm with Prince Henry of Prussia aboard.
Defying the rules and evading the fast steam boats on picket duty, Parsons took Turbinia between the ranks of battleships at full speed and then steamed up and down in front of the grandees, pursued in vain by Royal Navy vessels, one of which almost collided with the little greyhound of the sea. It was a sensation. With surprisingly little umbrage – it helped that the Germans were there to witness the episode, and Prince Henry of Prussia took care to send a congratulatory message to Parsons – the Navy took the hint and by 1905 had determined that all future warships would be turbine-powered. HMS Dreadnought was the first. In 1907, the vast liner Mauretania, powered by Parsons turbines, was photographed alongside her little predecessor, Turbinia.
The Spithead moment is in some ways misleading. The history of turbines and electricity is profoundly gradual, not marked by any sudden step changes. Parsons was just one of many people along the path who incrementally devised and improved the machines that made electricity and power. It was an evolution, not a series of revolutions. The key inventions along the way each built upon the previous one and made the next one possible. Alessandro Volta made the first battery in 1800; Humphry Davy made the first arc lamp in 1808; Hans Christian Oersted made the connection between electricity and magnetism in 1820; Michael Faraday and Joseph Henry made the first electric motor in 1821 and its opposite, the first generator, in 1831. Hippolyte Pixii made the first dynamo in 1832; Samuel Varley, Werner von Siemens and Charles Wheatstone all came up with the full dynamo-electric generator in 1867; Zénobe Gramme turned this into a direct-current generator in 1870.
Parsons’s turbine was about 2 per cent efficient at turning the energy of a coal fire into electricity. Today a modern combined-cycle gas turbine is about 60 per cent efficient. A graph of the progress between the two shows a steady improvement with no step changes. By 1910, using waste heat to preheat the water and the air, engineers had improved the efficiency to 15 per cent. By 1940, with pulverized coal, steam reheating and higher temperatures, it was nearer 30 per cent. In the 1960s, as the combined-cycle generator effectively brought a version of the turbojet engine in alongside the steam turbine, potential efficiency had almost doubled again. To single out clever people who made the difference along the way is both difficult and misleading. This was a collaborative effort of many brains. Long after the key technologies had been ‘invented’, innovation continued.
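The arithmetic of the combined cycle is worth spelling out: the steam cycle runs on the gas turbine’s exhaust heat, which would otherwise be thrown away. With illustrative component efficiencies, assumed here rather than taken from the text, of 38 per cent for the gas turbine and 35 per cent for the steam cycle behind it:

```latex
\[
\eta_{\text{combined}}
= \eta_{\text{gas}} + \bigl(1 - \eta_{\text{gas}}\bigr)\,\eta_{\text{steam}}
= 0.38 + 0.62 \times 0.35 \approx 0.60
\]
```

which is how a figure of about 60 per cent is reached without either engine alone coming anywhere near it.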
Nuclear power and the phenomenon of disinnovation
The twentieth century saw only one innovative source of energy on any scale: nuclear power. (Wind and solar, though much improved and with a promising future, still supply less than 2 per cent of global energy.) In terms of its energy density, nuclear is without equal: an object the size of a suitcase, suitably plumbed in, can power a town or an aircraft carrier almost indefinitely. The development of civil nuclear power was a triumph of applied science, the trail leading from the discovery of nuclear fission and the chain reaction through the Manhattan Project’s conversion of a theory into a bomb, to the gradual engineering of a controlled nuclear fission reaction and its application to boiling water. No individual stands out in such a story unless it be Leo Szilard’s early realization of the potential of a chain reaction in 1933, General Leslie Groves’s leadership of the Manhattan Project in the 1940s, or Admiral Hyman Rickover’s development of the first nuclear reactors and their adaptation to submarines and aircraft carriers in the 1950s. But as these names illustrate, it was a team effort within the military and state-owned enterprises, plus private contractors, and by the 1960s it had culminated in a huge programme of constructing plants that would use small amounts of enriched uranium to boil enormous amounts of water reliably, continuously and safely all over the world.
Yet today the picture is of an industry in decline, its electrical output shrinking as old plants close faster than new ones open, and an innovation whose time has passed, or a technology that has stalled. This is not for lack of ideas, but for a very different reason: lack of opportunity to experiment. The story of nuclear power is a cautionary tale of how innovation falters, and even goes backwards, if it cannot evolve.
The problem is cost inflation. Nuclear plants have seen their costs relentlessly rising for decades, mostly because of increasing caution about safety. And the industry remains insulated almost entirely from the one known human process that reliably pulls down costs: trial and error. Because error could be so cataclysmic in the case of nuclear power, and because trials are so gigantically costly, nuclear power cannot get trial and error restarted. So we are stuck with an immature and inefficient version of the technology, the pressurized-water reactor, and that is gradually being strangled by the requirements of regulators acting on behalf of worried people reacting to anti-nuclear activists. Also, technologies pushed on the world by governments, before they are really ready, sometimes falter, where they might have done better if allowed to progress a little more slowly. The transcontinental railroads in the United States were all failures, resulting in bankruptcies, except the one that was privately funded. One cannot help thinking that nuclear power, developed in less of a hurry and less as a military spin-off, might have done better.
In a book published in 1990, The Nuclear Energy Option, the nuclear physicist Bernard Cohen argued that the reason we stopped building nuclear plants in the 1980s in most of the West was not from fear of accidents, leaks or the proliferation of atomic waste; it was instead the inexorable escalation of costs driven by regulation. His diagnosis has proved even more true since.
This is not for want of ideas for new kinds of nuclear power. There are hundreds of different designs for fission reactors out there in engineers’ PowerPoint presentations, some of which have reached the working-prototype stage in the past and would have gone further if offered as much financial support as the conventional light-water reactor. Liquid-metal and liquid-salt reactors are two broad categories. The latter would work using salts of thorium or uranium fluoride, probably with other elements included such as lithium, beryllium, zirconium or sodium. The key advantage of such a design is that the fuel comes in liquid form, rather than as a solid rod, so cooling is more even and the removal of waste easier. There is no need to operate at high pressure, reducing the risks. The molten salt is the coolant as well as the fuel and has the neat property that the reaction slows down as it gets hotter, making meltdown impossible. In addition, the design would include a plug that would melt above a certain temperature, draining the fuel into a chamber where fission would cease, a second safety system. Compared with, say, Chernobyl, this is dramatically safer.
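The self-regulating property can be made concrete with a deliberately crude model. The sketch below assumes a linear, negative temperature coefficient of reactivity; every number in it is invented for illustration and taken neither from the text nor from any real design.

```python
T_OPERATING = 650.0       # degrees C at normal operation (assumed)
RHO_EXCESS = 0.003        # excess reactivity at the operating point (assumed)
ALPHA = -6e-5             # change in reactivity per degree C (assumed, negative)

def excess_reactivity(temp_c):
    """Linear toy model: the hotter the salt, the lower the reactivity."""
    return RHO_EXCESS + ALPHA * (temp_c - T_OPERATING)

# The chain reaction is self-sustaining only while excess reactivity is
# positive, so it dies out once the salt passes this temperature:
shutdown_temp = T_OPERATING + RHO_EXCESS / abs(ALPHA)
print(f"reaction shuts itself down above about {shutdown_temp:.0f} degrees C")
print(excess_reactivity(700.0))   # essentially zero by 700 degrees C
```

An overheating core therefore throttles itself rather than running away, which is the sense in which the reaction ‘slows down as it gets hotter’.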
Thorium is more abundant than uranium; it can in effect breed almost indefinitely by creating uranium 233; it can generate almost 100 times as much power from the same quantity of fuel; it does not give rise to fissile plutonium; it generates less waste with a shorter half-life. But although a submarine with sodium coolant was launched in the 1950s and two experimental thorium molten-salt reactors were built in the 1960s in the United States, the project eventually expired as all the money, training and interest focused on the light-water uranium design. Various countries are looking at how to reverse this decision, but none has really taken the plunge.
Even if they did, it seems unlikely that they would achieve the notorious promise made in the 1950s that nuclear power would one day be ‘too cheap to meter’. The problem is simply that nuclear power is a technology ill-suited to the most critical of innovation practices: learning by doing. Because each power station is so big and expensive, it has proved impossible to drive down the cost by experiment. Even changing the design halfway through construction is impossible because of the immense regulatory thicket that each design must pass through before construction. You must design the thing in advance and stick to that design or go back to square one. This way of doing things would fail to bring down costs and raise performance in any technology. It would leave computer chips at the 1960 stage. We build nuclear power stations like Egyptian pyramids, as one-off projects.
Following the Three Mile Island accident in 1979, and Chernobyl in 1986, activists and the public demanded greater safety standards. They got them. According to one estimate, per unit of power, coal kills nearly 2,000 times as many people as nuclear; bioenergy fifty times; gas forty times; hydro fifteen times; solar five times (people fall off roofs installing panels) and even wind power kills nearly twice as many as nuclear. These numbers include the accidents at Chernobyl and Fukushima. Extra safety requirements have simply turned nuclear power from a very, very safe system into a very, very, very safe system.
Or maybe they have made it less safe. Consider the Fukushima disaster of 2011. The design at Fukushima had huge safety flaws. Its pumps were in a basement easily flooded by a tidal wave, a simple design mistake unlikely to be repeated in a more modern design. It was an old reactor and would have been phased out long since if Japan had still been building new nuclear reactors. The stifling of nuclear expansion and innovation through costly over-regulation had kept Fukushima open past its due date, thus lowering the safety of the system.
The extra safety demanded by regulators has come at high cost. The labour that goes into the construction of a nuclear plant has hugely increased, but mostly in the white-collar jobs, signing off paperwork. According to one study, during the 1970s new regulations increased the quantity of steel per megawatt by 41 per cent, concrete by 27 per cent, piping by 50 per cent and electrical cable by 36 per cent. Indeed, as the ratchet of regulation turned, the projects began to add features to anticipate rule changes that sometimes did not even happen. Crucially this regulatory environment forced the builders of nuclear plants to drop the practice of on-the-spot innovation to solve unanticipated problems, lest it lead to regulatory resets, which further drove up cost.
The answer, of course, is to make nuclear power into a modular system, with small, factory-built reactor units produced off production lines in large quantities and installed like eggs in a crate at the site of each power station. This would drive down costs as it did for the Model T Ford. The problem is that it takes three years to certify a new reactor design, and there is little or no short-cut for a smaller one, so the cost of certification falls more heavily on a smaller design.
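The logic of ‘eggs in a crate’ is the logic of the learning curve. A minimal sketch, assuming Wright’s law with an invented 15 per cent learning rate (each doubling of cumulative output cuts unit cost by 15 per cent; the numbers are illustrative, not from the text):

```python
import math

def unit_cost(first_unit_cost, units_built, learning_rate=0.15):
    """Wright's law: each doubling of cumulative output cuts unit cost
    by `learning_rate` (15 per cent assumed here for illustration)."""
    b = math.log(1 - learning_rate, 2)    # progress exponent (negative)
    return first_unit_cost * units_built ** b

print(round(unit_cost(100.0, 8)))     # 8 one-off plants: cost ~61 (first unit = 100)
print(round(unit_cost(100.0, 512)))   # 512 factory-built modules: cost ~23
```

A handful of one-off plants barely moves down the curve; hundreds of factory-built modules move a long way down it, which is how mass production made the Model T cheap.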
Meanwhile, it is now likely that nuclear fusion, the process of releasing energy from the fusion of hydrogen atoms to form helium atoms, may at last fulfil its promise and begin to provide almost unlimited energy within the next few decades. The discovery of so-called high-temperature superconductors and the design of so-called spherical tokamaks have probably at last defused the old joke that fusion power is thirty years away – and has been for thirty years. Fusion may now come to commercial fruition, in the form of many relatively small reactors generating electricity, maybe 400 megawatts each. It is a technology that brings almost no risk of explosion or meltdown, very little in the way of radioactive waste and no worries about providing material for weapons. Its fuel is mainly hydrogen, which it can make with its own electricity from water, so its footprint on the earth will be small. The main problem fusion will still have to solve, as with nuclear fission, is how to drive down the cost by mass production of the reactors, with the ability to redesign from experience along the way so as to learn cost-cutting lessons.
Shale gas surprise