Doubting Darwin still
Yet, despite this overwhelming evidence of emergence, the yearning for design still lures millions of people back into doubting Darwin. The American ‘intelligent design’ movement evolved directly from a fundamentalist drive to promote religion within schools, coupled with a devious ‘end run’ to circumvent the USA’s constitutional separation between Church and state. It has largely focused upon the argument from design in order to try to establish that the complex functional arrangements of biology cannot be explained except by God. As Judge John Jones of Pennsylvania wrote in his judgement in the pivotal case of Kitzmiller v. Dover Area School District in 2005, although proponents of intelligent design ‘occasionally suggest that the designer could be a space alien or a time-traveling cell biologist, no serious alternative to God as the designer has been proposed’. Tammy Kitzmiller was one of several Dover parents who objected to her child being taught ‘intelligent design’ on a par with Darwinism. The parents went to court, and got the school district’s policy overturned.
In the United States, fundamentalist Christians have challenged Darwinism in schools for more than 150 years. They pushed state legislatures into adopting laws that prohibited state schools from teaching evolution, a trend that culminated in the Scopes ‘monkey trial’ of 1925. The defendant, John Scopes, deliberately taught evolution illegally to bring attention to the state of Tennessee’s anti-evolution law. Prosecuted by William Jennings Bryan and defended by Clarence Darrow, Scopes was found guilty and fined a paltry $100, and even that was overturned on a technicality at appeal. There is a widespread legend that Bryan’s victory was pyrrhic, because it made him look ridiculous and Scopes’s punishment was light. But this is a comforting myth told by saltwater liberals living on the coasts. In the American heartland, Scopes’s conviction emboldened the critics of Darwin greatly. Far from being ridiculed into silence, the fundamentalists gained ground in the aftermath of the Scopes trial, and held that ground for decades within the educational system. Textbooks became very cautious about Darwinism.
It was not until 1968 that the United States Supreme Court struck down all laws preventing the teaching of evolution in schools. Fundamentalists then fell back on teaching ‘creation science’, a concoction of arguments that purported to find scientific evidence for biblical events such as Noah’s flood. In 1987 the Supreme Court effectively proscribed the teaching of creationism on the grounds that it was religion, not science.
It was then that the movement reinvented itself as ‘intelligent design’, focusing on the old Aquinas–Paley argument from design in its simplest form. Creationists promptly rewrote their textbook Of Pandas and People, using a definition of intelligent design identical to the one they had used for creation science; and systematically replaced the words ‘creationism’ and ‘creationist’ with ‘intelligent design’ in 150 places. This went wrong in one case, resulting in a strange spelling mistake in the book, ‘cdesign proponentsists’, which came to be called the ‘missing link’ between the two movements. This ‘astonishing’ similarity between the two schools of thought was crucial in causing Judge John Jones to deem intelligent design religious rather than scientific when he struck down the Dover school district’s policy requiring that intelligent design be presented alongside evolution in 2005. Intelligent design, according to the textbook Of Pandas and People, argued that species came into existence abruptly, and through an intelligent agency, with their distinctive features already present: fish with fins and scales, birds with feathers.
Jones’s long Opinion in 2005 was a definitive demolition of a skyhook, all the more persuasive since it came from a Christian, Bush-appointed, politically conservative judge with no scientific training. Jones pointed out that the scientific revolution had rejected supernatural causes as explanations of natural phenomena, rejected appeal to authority, and rejected revelation, in favour of empirical evidence. He systematically took apart the evidence presented by Professor Michael Behe, the main scientific champion of intelligent design testifying for the defendants. Behe, in his book Darwin’s Black Box and subsequent papers, had used two main arguments for the existence of an intelligent designer: irreducible complexity and the purposeful arrangement of parts. The flagellum of a bacterium, he argued, is driven by a molecular rotary motor of great complexity. Remove any part of that system and it will not work. The blood-clotting system of mammals likewise consists of a cascade of biochemical reactions, none of which makes sense without the others. And the immune system, he claimed, was not only irreducibly complex; a natural explanation of it was impossible.
It was trivial work for evolution’s champions, such as Kenneth Miller, to dismantle these cases in the Dover trial to the satisfaction of the judge. A fully functional precursor of the bacterial flagellum with a different job, known as the type III secretion system, exists in some organisms and could easily have been adapted to make a rotary motor while still partly retaining its original advantageous role. (In the same way, the middle-ear bones of mammals, now used for hearing, are direct descendants of bones that were once part of the jaw of early fish.) The blood-clotting cascade is missing one step in whales and dolphins, or three steps in puffer fish, and still works fine. And the immune system’s mysterious complexity is yielding bit by bit to naturalistic explanations; what’s left no more implicates an intelligent designer, or a time-travelling genetic engineer, than it does natural selection. At the trial Professor Behe was presented with fifty-eight peer-reviewed papers and nine books about the evolution of the immune system.
As for the purposeful arrangement of parts, Judge Jones did not mince his words: ‘This inference to design based upon the appearance of a “purposeful arrangement of parts” is a completely subjective proposition, determined in the eye of each beholder and his/her viewpoint concerning the complexity of a system.’ Which is really the last word on Newton, Paley, Behe, and for that matter Aquinas.
More than 2,000 years ago Epicureans like Lucretius seem to have cottoned on to the power of natural selection, an idea that they probably got from the flamboyant Sicilian philosopher Empedocles (whose verse style was also a model for Lucretius), born in around 490 BC. Empedocles talked of animals that survived ‘being organised spontaneously in a fitting way; whereas those which grew otherwise perished and continue to perish’. It was, had Empedocles only known it, probably the best idea he ever had, though he never seems to have followed it through. Darwin was rediscovering an idea.
Gould’s swerve
Why was it even necessary, nearly 150 years after Darwin set out his theory, for Judge Jones to make the case again? This remarkable persistence of resistance to the idea of evolution, packaged and repackaged as natural theology, then creation science, then intelligent design, has never been satisfactorily explained. Biblical literalism cannot fully explain why people so dislike the idea of spontaneous biological complexity. After all, Muslims have no truck with the idea that the earth is 6,000 years old, yet they too find the argument from design persuasive. Probably fewer than 20 per cent of people in most Muslim-dominated countries accept Darwinian evolution as true. Adnan Oktar, for example, a polemical Turkish creationist who also uses the name Harun Yahya, employs the argument from design to ‘prove’ that Allah created living things. Defining design as ‘a harmonious assembling of various parts into an orderly form towards a common goal’, he then argues that birds show evidence of design, their hollowed bones, strong muscles and feathers making it ‘obvious that the bird is product of a certain design’. Such a fit between form and function, however, is very much part of the Darwinian argument too.
Secular people, too, often jib at accepting the idea that complex organs and bodies can emerge without a plan. In the late 1970s a debate within Darwinism, between a predominantly American school led by the fossil expert Stephen Jay Gould and a predominantly British one led by the behaviour expert Richard Dawkins, about the pervasiveness of adaptation, led to some bitter and high-profile exchanges. Dawkins thought that almost every feature of a modern organism had probably been subject to selection for a function, whereas Gould thought that lots of change happened for accidental reasons. By the end, Gould seemed to have persuaded many lay people that Darwinism had gone too far; that it was claiming a fit between form and function too often and too glibly; that the idea of the organism adapting to its environment through natural selection had been refuted or at least diminished. In the media, this fed what John Maynard Smith called ‘a strong wish to believe that the Darwinian theory is false’, and culminated in an editorial in the Guardian announcing the death of Darwinism.
Within evolutionary biology, however, Gould lost the argument. Asking what an organ had evolved to do continued to be the main means by which biologists interpreted anatomy, biochemistry and behaviour. Dinosaurs may have been large ‘to’ achieve stable temperatures and escape predation, while nightingales may sing ‘to’ attract females.
This is not the place to retell the story of that debate, which had many twists and turns, from the spandrels of the Cathedral of San Marco in Venice to the partial resemblance of a caterpillar to a bird’s turd. My purpose here is different – to discern the motivation of Gould’s attack on adaptationism and its extraordinary popularity outside science. It was Gould’s Lucretian swerve. Daniel Dennett, Darwin’s foremost philosopher, thought Gould was ‘following in a long tradition of eminent thinkers who have been seeking skyhooks – and coming up with cranes’, and saw his antipathy to ‘Darwin’s dangerous idea as fundamentally a desire to protect or restore the Mind-first, top–down vision of John Locke’.
Whether this interpretation is fair or not, the problem Darwin and his followers have is that the world is replete with examples of deliberate design, from watches to governments. Some of them even involve selection: the many different breeds of pigeons that Darwin so admired, from tumblers to fantails, were all produced by ‘mind-first’ selective breeding, just like natural selection but at least semi-deliberate and intentional. Darwin’s reliance on pigeon breeding to tell the tale of natural selection was fraught with danger – for his analogy was indeed a form of intelligent design.
Wallace’s swerve
Again and again, Darwin’s followers would go only so far, before swerving. Alfred Russel Wallace, for instance, co-discovered natural selection and was in many ways an even more radical enthusiast for Darwinism (a word he coined) than Darwin himself. Wallace was not afraid to include human beings within natural selection very early on; and he was almost alone in defending natural selection as the main mechanism of evolution in the 1880s, when it was sharply out of fashion. But then he executed a Lucretian swerve. Saying that ‘the Brain of the Savage [had been] shown to be Larger than he Needs it to be’ for survival, he concluded that ‘a superior intelligence has guided the development of man in a definite direction, and for a special purpose’. To which Darwin replied, chidingly, in a letter: ‘I hope you have not murdered too completely your own and my child.’
Later, in a book published in 1889 that resolutely champions Darwinism (the title of the book), Wallace ends by executing a sudden U-turn, just like Hume and so many others. Having demolished skyhook after skyhook, he suddenly erects three at the close. The origin of life, he declares, is impossible to explain without a mysterious force. It is ‘altogether preposterous’ to argue that consciousness in animals could be an emergent consequence of complexity. And mankind’s ‘most characteristic and noble faculties could not possibly have been developed by means of the same laws which have determined the progressive development of the organic world in general’. Wallace, who was by now a fervent spiritualist, demanded three skyhooks to explain life, consciousness and human mental achievements. These three stages of progress pointed, he said, to an unseen universe, ‘a world of spirit, to which the world of matter is altogether subordinate’.
The lure of Lamarck
The repeated revival of Lamarckian ideas to this day likewise speaks of a yearning to reintroduce mind-first intentionality into Darwinism. Jean-Baptiste de Lamarck suggested long before Darwin that creatures might inherit acquired characteristics – so a blacksmith’s son would inherit his father’s powerful forearms even though these were acquired by exercise, not inheritance. Yet people obviously do not inherit mutilations from their parents, such as amputated limbs, so for Lamarck to be right there would have to be some kind of intelligence inside the body deciding what was worth passing on and what was not. But you can see the appeal of such a scheme to those left disoriented by the departure of God the designer from the Darwinised scene. Towards the end of his life, even Darwin flirted with some tenets of Lamarckism as he struggled to understand heredity.
At the end of the nineteenth century, the German biologist August Weismann pointed out a huge problem with Lamarckism: the separation of germ-line cells (the ones that end up being eggs or sperm) from other body cells early in the life of an animal makes it virtually impossible for information to feed back from what happens to a body during its life into its very recipe. Since the germ cells were not an organism in microcosm, the message telling them to adopt an acquired character must, Weismann argued, be of an entirely different nature from the change itself. Changing a cake after it has been baked cannot alter the recipe that was used.
The Lamarckians did not give up, though. In the 1920s a herpetologist named Paul Kammerer in Vienna claimed to have changed the biology of midwife toads by changing their environment. The evidence was flaky at best, and wishfully interpreted. When accused of fraud, Kammerer killed himself. A posthumous attempt by the writer Arthur Koestler to make Kammerer into a martyr to the truth only reinforced the desperation so many non-scientists felt to rescue a top–down explanation of evolution.
It is still going on. Epigenetics is a respectable branch of genetic science that examines how chemical modifications to DNA, acquired early in life in response to experience without altering the underlying sequence, can affect the adult body. There is a much more speculative version of the story, though. Most of these modifications are swept clean when the sperm and egg cells are made, but perhaps a few just might survive the jump into a new generation. Certain genetic disorders, for example, seem to manifest themselves differently according to whether the mutant chromosome was inherited from the mother or the father – implying a sex-specific ‘imprint’ on the gene. And one study seemed to find a sex-specific effect on the mortality of Swedes according to how hungry their grandparents were when young. From a small number of such cases, none with very powerful results, some modern Lamarckians began to make extravagant claims for the vindication of the eighteenth-century French aristocrat. ‘Darwinian evolution can include Lamarckian processes,’ wrote Eva Jablonka and Marion Lamb in 2005, ‘because the heritable variation on which selection acts is not entirely blind to function; some of it is induced or “acquired” in response to the conditions of life.’
But the evidence for these claims remains weak. All the data suggest that the epigenetic state of DNA is reset in each generation, and that even if this fails to happen, the amount of information imparted by epigenetic modifications is a minuscule fraction of the information imparted by genetic information. Besides, ingenious experiments with mice show that all the information required to reset the epigenetic modifications themselves actually lies in the genetic sequence. So the epigenetic mechanisms must themselves have evolved by good old Darwinian random mutation and selection. In effect, there is no escape to intentionality to be found here. Yet the motive behind the longing to believe in epigenetic Lamarckism is clear. As David Haig of Harvard puts it, ‘Jablonka and Lamb’s frustration with neo-Darwinism is with the pre-eminence that is ascribed to undirected, random sources of heritable variation.’ He says he is ‘yet to hear a coherent explanation of how the inheritance of acquired characters can, by itself, be a source of intentionality’. In other words, even if you could prove some Lamarckism in epigenetics, it would not remove the randomness.
Culture-driven genetic evolution
In fact, there is a way for acquired characteristics to come to be incorporated into genetic inheritance, but it takes many generations and it is blindly Darwinian. It goes by the name of the Baldwin effect. A species that over many generations repeatedly exposes itself to some experience will eventually find its offspring selected for a genetic predisposition to cope with that experience. Why? Because the offspring that by chance happen to start with a predisposition to cope with that circumstance will survive better than others. The genes can thereby come to embody the experience of the past. Something that was once learned can become an instinct.
A similar though not identical phenomenon is illustrated by the ability to digest lactose sugar in milk, which many people with ancestors from western Europe and eastern Africa possess. Few adult mammals can digest lactose, since milk is not generally drunk after infancy. In two parts of the world, however, human beings evolved the capacity to retain lactose digestion into adulthood by not switching off genes for lactase enzymes. These happened to be the two regions where cattle were first domesticated for milk production. What a happy coincidence! Because people could digest lactose, they were able to invent dairy farming? Well no, the genetic switch plainly happened as a consequence, not a cause, of the invention of dairy farming. But it still had to happen through random mutation followed by non-random survival. Those born by chance with the mutation that caused persistence of lactose digestion tended to be stronger and healthier than their siblings and rivals who could digest less of the goodness in milk. So they thrived, and the gene for lactose digestion spread rapidly. On closer inspection, this incorporation of ancestral experience into the genes is all crane and no skyhook.
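For readers who like to see that logic in the concrete, here is a minimal simulation sketch of ‘random mutation followed by non-random survival’. The population size, the fitness advantage and the time span are my own illustrative assumptions, not measured values for the lactase-persistence mutation:

```python
# A toy model of a single advantageous mutation spreading through a population.
# All numbers are illustrative assumptions, not data about lactase persistence.
import random

POP_SIZE = 10_000      # assumed population size
ADVANTAGE = 1.05       # assumed relative fitness of carriers (a 5 per cent edge)
GENERATIONS = 400

def next_generation(freq):
    """One round of selection (non-random survival) plus sampling luck (drift)."""
    expected = freq * ADVANTAGE / (freq * ADVANTAGE + (1 - freq))
    carriers = sum(1 for _ in range(POP_SIZE) if random.random() < expected)
    return carriers / POP_SIZE

freq = 1 / POP_SIZE    # the mutation arises once, by chance, in one individual
for gen in range(GENERATIONS):
    freq = next_generation(freq)
    if freq == 0:      # most new mutations are simply lost by chance
        print(f"lost by drift at generation {gen}")
        break
    if gen % 50 == 0:
        print(f"generation {gen:3d}: carrier frequency {freq:.3f}")
```

Run it a few times: the new mutation is usually lost straight away by sheer bad luck, but when it does survive the early generations its frequency climbs inexorably – the non-random part of the story.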
So incredible is the complexity of the living world, so counterintuitive is the idea of boot-strapped, spontaneous intricacy, that even the most convinced Darwinian must, in the lonely hours of the night, have moments of doubt. Like Screwtape the devil whispering in the ear of a believer, the ‘argument from personal incredulity’ (as Richard Dawkins calls it) can be very tempting, even if you remind yourself that it’s a massive non sequitur to find divinity in ignorance.
4
The Evolution of Genes
For certainly the elements of things do not collect
And order their formations by their cunning intellect,
Nor are their motions something they agree upon or propose;
But being myriad and many-mingled, plagued by blows
And buffeted through the universe for all time past,
By trying every motion and combination, they at last
Fell into the present form in which the universe appears.
Lucretius, De Rerum Natura, Book 1, lines 1021–7
An especially seductive chunk of current ignorance is that concerning the origin of life. For all the confidence with which biologists trace the emergence of complex organs and organisms from simple proto-cells, the first emergence of those proto-cells is still shrouded in darkness. And where people are baffled, they are often tempted to resort to mystical explanations. When the molecular biologist Francis Crick, that most materialist of scientists, started speculating about ‘panspermia’ in the 1970s – the idea that life perhaps originated elsewhere in the universe and got here by microbial seeding – many feared that he was turning a little mystical. In fact he was just making an argument about probability: that it was highly likely, given the youth of the earth relative to the age of the universe, that some other planet got life before us and infected other solar systems. Still, he was emphasising the impenetrability of the problem.
Life consists of the capacity to reverse the drift towards entropy and disorder, at least locally: to store information, to harness energy, and to use both to make local order from chaos. Essential to these three skills are three kinds of molecule in particular – DNA for storing information, protein for making order, and ATP as the medium of energy exchange. How these came together is a chicken-and-egg problem. DNA cannot be made without proteins, nor proteins without DNA. As for energy, a bacterium uses up fifty times its own body weight in ATP molecules in each generation. Early life must have been even more profligate, yet would have had none of the modern molecular machinery for harnessing and storing energy. Wherever did it find enough ATP?
The crane that seems to have put these three in place was probably RNA, a molecule that still plays many key roles in the cell, and that can both store information like DNA, and catalyse reactions like proteins do. Moreover, RNA is made of units of base, phosphate and ribose sugar, just as ATP is. So the prevailing theory holds that there was once an ‘RNA World’, in which living things had RNA bodies with RNA genes, using RNA ingredients as an energy currency. The problem is that even this system is so fiendishly complex and interdependent that it’s hard to imagine it coming into being from scratch. How, for example, would it have avoided dissipation: kept together its ingredients and concentrated its energy without the boundaries provided by a cell membrane? In the ‘warm little pond’ that Charles Darwin envisaged for the beginning of life, life would have dissolved away all too easily.
Don’t give up. Until recently the origin of the RNA World seemed so difficult a problem that it gave hope to mystics; John Horgan wrote an article in Scientific American in 2011 entitled ‘Psst! Don’t Tell the Creationists, But Scientists Don’t Have a Clue How Life Began’.
Yet today, just a few years later, there’s the glimmer of a solution. DNA sequences show that at the very root of life’s family tree are simple cells that do not burn carbohydrates like the rest of us, but effectively charge their electrochemical batteries by converting carbon dioxide into methane or the organic compound acetate. If you want to find a chemical environment that echoes the one these chemi-osmotic microbes have within their cells, look no further than the bottom of the Atlantic Ocean. In the year 2000, explorers found hydrothermal vents on the mid-Atlantic ridge that were quite unlike those they knew from other geothermal spots on the ocean floor. Instead of very hot, acidic fluids, as are found at ‘black-smoker’ vents, the new vents – known as the Lost City Hydrothermal Field – are only warm, are highly alkaline, and appear to last for tens of thousands of years. Two scientists, Nick Lane and William Martin, have begun to list the similarities between these vents and the insides of chemi-osmotic cells, finding uncanny echoes of life’s method of storing energy. Basically, cells store energy by pumping electrically charged particles, usually sodium or hydrogen ions, across membranes, effectively creating an electrical voltage. This is a ubiquitous and peculiar feature of living creatures, but it appears it might have been borrowed from vents like those at Lost City.
Four billion years ago the ocean was acidic, saturated with carbon dioxide. Where the alkaline fluid from the vents met the acidic water, there was a steep proton gradient across the thin iron-nickel-sulphur walls of the pores that formed at the vents. That gradient had a voltage very similar in magnitude to the one in a modern cell. Inside those mineral pores, chemicals would have been trapped in a space with abundant energy, which could have been used to build more complex molecules. These in turn – as they began to accidentally replicate themselves using the energy from the proton gradients – became gradually more susceptible to a pattern of survival of the fittest. And the rest, as Daniel Dennett would say, is algorithm. In short, an emergent account of the origin of life is almost within reach.
All crane and no skyhook
As I mentioned earlier, the diagnostic feature of life is that it captures energy to create order. This is also a hallmark of civilisation. Just as each person uses energy to make buildings and devices and ideas, so each gene uses energy to make a structure of protein. A bacterium is limited in how large it can grow by the quantity of energy available to each gene. That’s because the energy is captured at the cell membrane by pumping protons across the membrane, and the bigger the cell, the smaller its surface area relative to its volume. The only bacteria that grow big enough to be seen by the naked eye are ones that have huge empty vacuoles inside them.
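The geometry behind that limit is worth a moment. A minimal numerical sketch (my own, with hypothetical cell radii) shows how quickly membrane area falls behind volume as a cell grows:

```python
# Surface area versus volume for a spherical cell of growing radius.
# The radii are hypothetical; the point is the scaling, not the exact numbers.
from math import pi

for radius_um in (1, 2, 5, 10):                  # cell radius in micrometres
    surface = 4 * pi * radius_um ** 2            # membrane area available for proton pumping
    volume = (4 / 3) * pi * radius_um ** 3       # cytoplasm that the captured energy must serve
    print(f"radius {radius_um:2d} um -> surface/volume = {surface / volume:.2f} per um")
# The ratio falls as 3/r: double the radius and each unit of cytoplasm has only
# half as much energy-gathering membrane - the bacterial size limit described above.
```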
However, at some point around two billion years after life started, huge cells began to appear with complicated internal structures; we call them eukaryotes, and we (animals as well as plants, fungi and protozoa) are them.
Nick Lane argues that the eukaryotic (r)evolution was made possible by a merger: a bunch of bacteria began to live inside an archaeal cell (a different kind of microbe). Today the descendants of these bacteria are known as mitochondria, and they generate the energy we need to live. During every second of your life your thousand trillion mitochondria pump a billion trillion protons across their membranes, capturing the electrical energy needed to forge your proteins, DNA and other macromolecules.
Mitochondria still have their own genes, but only a small number – thirteen protein-coding genes in us. This simplification of their genome was vital. It enabled them to generate far more surplus energy to support the work of ‘our genome’, which is what enables us to have complex cells, complex tissues and complex bodies. As a result we eukaryotes have tens of thousands of times more energy available per gene, making each of our genes capable of far greater productivity. That allows us to have larger cells as well as more complex structures. In effect, we overcame the size limit of the bacterial cell by hosting multiple internal membranes in mitochondria, and then simplifying the genomes needed to support those membranes.
There is an uncanny echo of this in the Industrial (R)evolution. In agrarian societies, a family could grow just enough food to feed itself, but there was little left over to support anybody else. So only very few people could have castles, or velvet coats, or suits of armour, or whatever else needed making with surplus energy. The harnessing of oxen, horses, wind and water helped generate a bit more surplus energy, but not much. Wood was no use – it provided heat, not work. So there was a permanent limit on how much a society could make in the way of capital – structures and things.
Then in the Industrial (R)evolution an almost inexhaustible supply of energy was harnessed in the form of coal. Coal miners, unlike peasant farmers, produced vastly more energy than they consumed. And the more they dug out, the better they got at it. With the first steam engines, the barrier between heat and work was breached, so that coal’s energy could now amplify the work of people. Suddenly, just as the eukaryotic (r)evolution vastly increased the amount of energy per gene, so the Industrial (R)evolution vastly increased the amount of energy per worker. And that surplus energy, so the energy economist John Constable argues, is what built (and still builds) the houses, machines, software and gadgets – the capital – with which we enrich our lives. Surplus energy is indispensable to modern society, and is the symptom of wealth. An American consumes about ten times as much energy as a Nigerian, which is the same as saying he is ten times richer. ‘With coal almost any feat is possible or easy,’ wrote William Stanley Jevons; ‘without it we are thrown back into the laborious poverty of early times.’ Both the evolution of surplus energy generation by eukaryotes, and the evolution of surplus energy by industrialisation, were emergent, unplanned phenomena.
But I digress. Back to genomes. A genome is a digital computer program of immense complexity. The slightest mistake would alter the pattern, dose or sequence of expression of its 20,000 genes (in human beings), or affect the interaction of its hundreds of thousands of control sequences that switch genes on and off, and result in disastrous deformity or a collapse into illness. In most of us, for an incredible eight or nine decades, the computer program runs smoothly with barely a glitch.
Consider what must happen every second in your body to keep the show on the road. You have maybe ten trillion cells, not counting the bacteria that make up a large part of your body. Each of those cells is at any one time transcribing several thousand genes, a procedure that involves several hundred proteins coming together in a specific way and catalysing tens of chemical reactions for each of millions of base pairs. Each of those transcripts generates a protein molecule, hundreds or thousands of amino acids long, which it does by entering a ribosome, a machine with tens of moving parts, capable of catalysing a flurry of chemical reactions. The proteins themselves then fan out within and without cells to speed reactions, transport goods, transmit signals and prop up structures. Millions of trillions of these immensely complicated events are occurring every second in your body to keep you alive, very few of which go wrong. It’s like the world economy in miniature, only even more complex.
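A rough back-of-the-envelope check makes the ‘millions of trillions’ claim concrete. The per-cell rates below are my own illustrative assumptions layered on the figures in the text:

```python
# Order-of-magnitude check on the number of molecular events per second.
# CELLS comes from the text; the two per-cell rates are assumed for illustration.
CELLS = 1e13                        # about ten trillion cells
ACTIVE_GENES_PER_CELL = 2e3         # assumed: a few thousand genes being transcribed at once
EVENTS_PER_GENE_PER_SECOND = 1e2    # assumed: chemical reactions per active gene each second

events_per_second = CELLS * ACTIVE_GENES_PER_CELL * EVENTS_PER_GENE_PER_SECOND
print(f"roughly {events_per_second:.0e} events per second")  # ~2e18, i.e. millions of trillions
```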
It is hard to shake the illusion that for such a computer to run such a program, there must be a programmer. Geneticists in the early days of the Human Genome Project would talk of ‘master genes’ that commanded subordinate sequences. Yet no such master gene exists, let alone an intelligent programmer. The entire thing not only emerged piece by piece through evolution, but runs in a democratic fashion too. Each gene plays its little role; no gene comprehends the whole plan. Yet from this multitude of precise interactions results a spontaneous design of unmatched complexity and order. There was never a better illustration of the validity of the Enlightenment dream – that order can emerge where nobody is in charge. The genome, now sequenced, stands as emphatic evidence that there can be order and complexity without any management.
On whose behalf?
Let’s assume for the sake of argument that I have persuaded you that evolution is not directed from above, but is a self-organising process that produces what Daniel Dennett calls ‘free-floating rationales’ for things. That is to say, for example, a baby cuckoo pushes the eggs of its host from the nest in order that it can monopolise its foster parents’ efforts to feed it, but nowhere has that rationale ever existed as a thought either in the mind of the cuckoo or in the mind of a cuckoo’s designer. It exists now in your mind and mine, but only after the fact. Bodies and behaviours teem with apparently purposeful function that was never foreseen or planned. You will surely agree that this model can apply within the human genome, too; your blood-clotting genes are there to make blood-clotting proteins, the better to clot blood at a wound; but that functional design does not imply an intelligent designer who foresaw the need for blood clotting.
I’m now going to tell you that you have not gone far enough. God is not the only skyhook. Even the most atheistic scientist, confronted with facts about the genome, is tempted into command-and-control thinking. Here’s one, right away: the idea that genes are recipes patiently waiting to be used by the cook that is the body. The collective needs of the whole organism are what the genes are there to serve, and they are willing slaves. You find this assumption behind almost any description of genetics – including ones by me – yet it is misleading. For it is just as truthful to turn the image upside down. The body is the plaything and battleground of genes at least as much as it is their purpose. Whenever somebody asks what a certain gene is for, they automatically assume that the question relates to the needs of the body: what is it for, in terms of the body’s needs? But there are plenty of times when the answer to that question is ‘The gene itself.’
The scientist who first saw this is Richard Dawkins. Long before he became well known for his atheism, Dawkins was famous for the ideas set out in his book The Selfish Gene. ‘We are survival machines – robot vehicles blindly programmed to preserve the selfish molecules known as genes,’ he wrote. ‘This is a truth that still fills me with astonishment.’ He was saying that the only way to understand organisms was to see them as mortal and temporary vehicles used to perpetuate effectively immortal digital sequences written in DNA. A male deer risks his life in a battle with another stag, and a female exhausts her reserves of calcium producing milk for her young, not to help their own bodies survive but to pass their genes on to the next generation. Far from preaching selfish behaviour, therefore, this theory explains why we are often altruistic: it is the selfishness of the genes that enables individuals to be selfless. A bee suicidally stinging an animal that threatens the hive is dying for its country (or hive) so that its genes may survive – only in this case the genes are passed on indirectly, through the stinger’s mother, the queen. It makes more sense to see the body as serving the needs of the genes than vice versa. Bottom–up.
One paragraph of Dawkins’s book, little noticed at the time, deserves special attention. It has proved to be the founding text of an extremely important theory. He wrote:
Sex is not the only apparent paradox that becomes less puzzling the moment we learn to think in selfish gene terms. For instance, it appears that the amount of DNA in organisms is more than is strictly necessary for building them: a large fraction of the DNA is never translated into protein. From the point of view of the individual this seems paradoxical. If the ‘purpose’ of DNA is to supervise the building of bodies it is surprising to find a large quantity of DNA which does no such thing. Biologists are racking their brains trying to think what useful task this apparently surplus DNA is doing. From the point of view of the selfish genes themselves, there is no paradox. The true ‘purpose’ of DNA is to survive, no more and no less. The simplest way to explain the surplus DNA is to suppose that it is a parasite, or at best a harmless but useless passenger, hitching a ride in the survival machines created by the other DNA.
One of the people who read that paragraph and began thinking about it was Leslie Orgel, a chemist at the Salk Institute in California. He mentioned it to Francis Crick, who cited it in an article about the new and surprising discovery of ‘split genes’ – the fact that most animal and plant genes contain long sequences of DNA called ‘introns’ that are discarded after transcription. Crick and Orgel then wrote a paper expanding on Dawkins’s selfish DNA explanation for all the extra DNA. So, at the same time, did the Canadian molecular biologists Ford Doolittle and Carmen Sapienza. ‘Sequences whose only “function” is self-preservation will inevitably arise and be maintained,’ wrote the latter. The two papers were published simultaneously in 1980.
It turns out that Dawkins was right. What would his theory predict? That the spare DNA would have features that made it good at getting itself duplicated and re-inserted into chromosomes. Bingo. The commonest gene in the human genome is the recipe for reverse transcriptase, an enzyme that the human body has little or no need for, and whose main function is usually to help the spread of retroviruses. Yet there are more copies and half-copies of this gene than of all other human genes combined. Why? Because reverse transcriptase is a key part of any DNA sequence that can copy itself and distribute the copies around the genome. It’s a sign of a digital parasite. Most of the copies are inert these days, and some are even put to good use, helping to regulate real genes or bind proteins. But they are there because they are good at being there.
The skyhook here is a sort of cousin of Locke’s ‘mind-first’ thinking: the assumption that the human good is the only good pursued within our bodies. The alternative view, penetratingly articulated by Dawkins, takes the perspective of the gene itself: how DNA would behave if it could. Close to half of the human genome consists of so-called transposable elements that exploit reverse transcriptase. Some of the commonest are known by names like LINEs (17 per cent of the genome), SINEs (11 per cent) and LTR retrotransposons (8 per cent). Actual genes, by contrast, fill just 2 per cent of the genome. These transposons are sequences that are good at getting themselves copied, and there is no longer a smidgen of doubt that they are (mostly inert) digital parasites. They are not there for the needs of the body at all.
Junk is not the same as garbage
There is a close analogy with computer viruses, which did not yet exist when Dawkins suggested the genetic version of the concept of digital parasitism. Some of the transposons, the SINEs, appear to be parasites of parasites, because they use the apparatus of longer, more complete sequences to get themselves disseminated. For all the heroic attempts to see their function in terms of providing variability that might one day lead to a brave new mutation, the truth is that their most immediate and frequent effect is the occasional disruption of the reading of genes.
Of course, these selfish DNA sequences can thrive only because a small percentage of the genome does something much more constructive – builds a body that grows, learns and adapts sufficiently to its physical and social environment that it can eventually thrive, attract a mate and have babies. At which point the selfish DNA says, ‘Thank you very much, we’ll be making up half the sequence in the children too.’
It is currently impossible to explain the huge proportion of the human genome devoted to these transposons except by reference to the selfish DNA theory. There’s just no other theory that comes close to fitting the facts. Yet it is routinely rejected, vilified and ‘buried’ by commentators on the fringe of genetics. The phrase that really gets their goat is ‘junk DNA’. It’s almost impossible to read an article on the topic without coming across surprisingly passionate denunciations of the ‘discredited’ notion that some of the DNA in a genome is useless. ‘We have long felt that the current disrespectful (in a vernacular sense) terminology of junk DNA and pseudogenes,’ wrote Jürgen Brosius and Stephen Jay Gould in an early salvo in 1992, ‘has been masking the central evolutionary concept that features of no current utility may hold crucial evolutionary importance as recruitable sources of future change.’ Whenever I write about this topic, I am deluged with moralistic denunciations of the ‘arrogance’ of scientists for rejecting unknown functions of DNA sequences. To which I reply: functions for whom? The body or the sequences?
This moral tone to the disapproval of ‘so-called’ junk DNA is common. People seem to be genuinely offended by the phrase. They sound awfully like the defenders of faith confronted with evolution – it’s the bottom–up nature of the story that they dislike. Yet as I shall show, selfish DNA and junk DNA are both about as accurate as metaphors ever can be. And junk is not the same as garbage.
What’s the fuss about? In the 1960s, as I mentioned earlier, molecular biologists began to notice that there seemed to be far more DNA in a cell than was necessary to make all the proteins in the cell. Even with what turned out to be a vastly over-inflated estimate of the number of genes in the human genome – then thought to be more than 100,000, now known to be about 20,000 – genes and their control sequences could account for only a small percentage of the total weight of DNA present in a cell’s chromosomes, at least in mammals. It’s less than 3 per cent in people. Worse, there was emerging evidence that we human beings did not seem to have the heaviest genomes or the most DNA. Humble protozoa, onions and salamanders have far bigger genomes. Grasshoppers have three times as much; lungfish forty times as much. Known by the obscure name of the ‘C-value paradox’, this enigma exercised the minds of some of the most eminent scientists of the day. One of them, Susumu Ohno, coined the term ‘junk DNA’, arguing that much of the DNA might not be under selection – that is to say, might not be continuously honed by evolution to fit a function of the body.
He was not saying it was garbage. As Sydney Brenner later made plain, people everywhere make the distinction between two kinds of rubbish: ‘garbage’ which has no use and must be disposed of lest it rot and stink, and ‘junk’, which has no immediate use but does no harm and is kept in the attic in case it might one day be put back to use. You put garbage in the rubbish bin; you keep junk in the attic or garage.
Yet the resistance to the idea of junk DNA mounted. As the number of human genes steadily shrank in the 1990s and 2000s, so the desperation to prove that the rest of the genome must have a use (for the organism) grew. The new simplicity of the human genome bothered those who liked to think of the human being as the most complex creature on the planet. Junk DNA was a concept that had to be challenged. The discovery of RNA-coding genes, and of multiple control sequences for adjusting the activity of genes, seemed to offer some straws of hope to grasp. When it became clear that on top of the 5 per cent of the genome that seemed to be specifically protected from change between human beings and related species, another 4 per cent showed some evidence of being under selection, the prestigious journal Science was moved to proclaim ‘no more junk DNA’. What about the other 91 per cent?
In 2012 the anti-junk campaign culminated in a raft of hefty papers from a huge consortium of scientists called ENCODE. These were greeted, as intended, with hype in the media announcing the Death of Junk DNA. By defining non-junk as any DNA that had something biochemical happen to it during normal life, they were able to assert that about 80 per cent of the genome was functional. (And this was in cancer cells, with abnormal patterns of DNA hyperactivity.) That still left 20 per cent with nothing going on. But there are huge problems with this wide definition of ‘function’, because many of the things that happened to the DNA did not imply that the DNA had an actual job to do for the body, merely that it was subject to housekeeping chemical processes. Realising they had gone too far, some of the ENCODE team began to use smaller numbers when interviewed afterwards. One claimed only 20 per cent was functional, before insisting none the less that the term ‘junk DNA’ should be ‘totally expunged from the lexicon’ – which, as Dan Graur of the University of Houston and his colleagues remarked in a splenetic riposte in early 2013, thus invented a new arithmetic according to which 20 per cent is greater than 80 per cent.
If this all seems a bit abstruse, perhaps an analogy will help. The function of the heart, we would surely agree, is to pump blood. That is what natural selection has honed it to do. The heart does other things, such as add to the weight of the body, produce sounds and prevent the pericardium from deflating. Yet to call those the functions of the heart is silly. Likewise, just because junk DNA is sometimes transcribed or altered, that does not mean it has function as far as the body is concerned. In effect, the ENCODE team was arguing that grasshoppers are three times as complex, onions five times, and lungfish forty times as complex as human beings. As the evolutionary biologist Ryan Gregory put it, anyone who thinks he or she can assign a function to every letter in the human genome should be asked why an onion needs a genome that is about five times larger than a person’s.
Who’s resorting to a skyhook here? Not Ohno or Dawkins or Gregory. They are saying the extra DNA just comes about, there not being sufficient selective incentive for the organism to clear out its genomic attic. (Admittedly, the idea of junk in your attic that duplicates itself if you do nothing about it is moderately alarming!) Bacteria, with large populations and brisk competition to grow faster than their rivals, generally do keep their genomes clear of junk. Large organisms do not. Yet there is clearly a yearning that many people have to prefer an explanation that sees the spare DNA as having a purpose for us, not for itself. As Graur puts it, the junk critics have fallen prey to ‘the genomic equivalent of the human propensity to see meaningful patterns in random data’.
Whenever I raised the topic of junk DNA in recent years I was astonished by the vehemence with which I was told by scientists and commentators that I was wrong, that its existence had been disproved. In vain did I point out that on top of the transposons, the genome was littered with ‘pseudogenes’ – rusting hulks of dead genes – not to mention that 96 per cent of the RNA transcribed from genes was discarded before proteins were made from the transcripts (the discards are ‘introns’). Even though some parts of introns and pseudogenes are used in control sequences, it was clear the bulk was just taking up space, its sequence free to change without consequence for the body. Nick Lane argues that even introns are descended from digital parasites, from the period when an archaeal cell ingested a bacterium and turned it into the first mitochondrion, only to see its own DNA invaded by selfish DNA sequences from the ingested bacterium: the way introns are spliced out betrays their ancestry as self-splicing introns from bacteria.
Junk DNA reminds us that the genome is built by and for DNA sequences, not by and for the body. The body is an emergent phenomenon consequent upon the competitive survival of DNA sequences, and a means by which the genome perpetuates itself. And though the natural selection that results in evolutionary change is very far from random, the mutations themselves are random. It is a process of blind trial and error.
Red Queen races
Even in the heart of genetics labs there is a long tradition of resistance to the idea that mutation is purely random and comes with no intentionality, even if selection is not random. Theories of directed mutation come and go, and many highly reputable scientists embrace them, though the evidence remains elusive. The molecular biologist Gabby Dover, in his book Dear Mr Darwin, tried to explain the implausible fact that some centipedes have 173 body segments without relying exclusively on natural selection. His argument was basically that it was unlikely that a randomly generated 346-legged centipede survived and bred at the expense of one with slightly fewer legs. He thinks some other explanation is needed for how the centipede got its segments. He finds such an explanation in ‘molecular drive’, an idea that remains frustratingly vague in Dover’s book, but has a strong top–down tinge. In the years since Dover put forward the notion, molecular drive has sunk with little trace, following so many other theories of directed mutation into oblivion. And no wonder: if mutation is directed, then there would have to be a director, and we’re back to the problem of how the director came into existence: who directed the director? Whence came this knowledge of the future that endowed a gene with the capacity to plan a sensible mutation?
In medicine, an understanding of evolution at the genomic level is both the problem and the solution. Bacterial resistance to antibiotics, and chemotherapeutic drug resistance within tumours, are both pure Darwinian evolutionary processes: the emergence of survival mechanisms through selection. The use of antibiotics selects for rare mutations in genes in bacteria that enable them to resist the drugs. The emergence of antibiotic resistance is an evolutionary process, and it can only be combated by an evolutionary process. It is no good expecting somebody to invent the perfect antibiotic, and find some way of using it that does not elicit resistance. We are in an arms race with germs, whether we like it or not. The mantra should always be the Red Queen’s (from Lewis Carroll’s Through the Looking-Glass): ‘Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!’ The search for the next antibiotic must begin long before the last one is ineffective.
That, after all, is how the immune system works. It does not just produce the best antibodies it can find; it sets out to experiment and evolve in real time. Human beings cannot expect to rely upon evolving resistance to parasites quickly enough by the selective death of susceptible people, because our generation times are too long. We have to allow evolution within our bodies within days or hours. And this the immune system is designed to achieve. It contains a system for recombining different forms of proteins to increase their diversity, and for rapidly multiplying whichever antibody suddenly finds itself in action. Moreover, the genome includes a set of genes whose sole aim seems to be to maintain a huge diversity of forms: the major histocompatibility complex. The job of these 240 or so MHC genes is to present antigens from invading pathogens to the immune system so as to elicit an immune response. They are the most variable genes known, with one – HLA-B – coming in about 1,600 different versions in the human population. There is some evidence that many animals go to some lengths to maintain or enhance the variability further, by, for example, seeking out mates with different MHC genes (detected by smell).
If the battle against microbes is a never-ending, evolutionary arms race, then so is the battle against cancer. A cell that turns cancerous and starts to grow into a tumour, then spreads to other parts of the body, has to evolve by genetic selection as it does so. It has to acquire mutations that encourage it to grow and divide; mutations that ignore the instructions to stop growing or commit suicide; mutations that cause blood vessels to grow into the tumour to supply it with nutrients; and mutations that enable cells to break free and migrate. Few of these mutations will be present in the first cancerous cell, but tumours usually acquire another mutation – one that massively rearranges their genomes, thus experimenting on a grand scale, as if unconsciously seeking to find a way by trial and error to acquire these needed mutations.
The whole process looks horribly purposeful, and malign. The tumour is ‘trying’ to grow, ‘trying’ to get a blood supply, ‘trying’ to spread. Yet, of course, the actual explanation is emergent: there is competition for resources and space among the many cells in a tumour, and the one cell that acquires the most helpful mutations will win. It is precisely analogous to evolution in a population of creatures. These days, the cancer cells often need another mutation to thrive: one that will outwit the chemotherapy or radiotherapy to which the cancer is subjected. Somewhere in the body, one of the cancer cells happens to acquire a mutation that defeats the drug. As the rest of the cancer dies away, the descendants of this rogue cell gradually begin to multiply, and the cancer returns. Heartbreakingly, this is what happens all too often in the treatment of cancer: initial success followed by eventual failure. It’s an evolutionary arms race.
The more we understand genomics, the more it confirms evolution.
5
The Evolution of Culture
And therefore to assume there was one person gave a name
To everything, and that all learned their first words from the same,
Is stuff and nonsense. Why should one human being from among
The rest be able to designate and name things with his tongue
And others not possess the power to do likewise? …
Lucretius, De Rerum Natura, Book 5, lines 1041–5
The development of an embryo into a body is perhaps the most beautiful of all demonstrations of spontaneous order. The more we understand of how it happens, the less it looks like the following of a central set of instructions. As Richard Dawkins writes in his book The Greatest Show on Earth, ‘The key point is that there is no choreographer and no leader. Order, organisation, structure – these all emerge as by-products of rules which are obeyed locally and many times over.’ There is no overall plan, just cells reacting to local effects. It is as if an entire city emerged from chaos just because people responded to local incentives in the way they set up their homes and businesses. (Oh, hang on – that is how cities emerged too.)
Look at a bird’s nest: beautifully engineered to provide protection and camouflage to a family of chicks, made to a consistent (but unique) design for each species, yet constructed by the simplest of instincts with no overall plan in mind, just a string of innate urges. I had a fine demonstration of this one year when a mistle thrush tried to build a nest on the metal fire escape outside my office. The result was a disaster, because each step of the fire escape looked identical, so the poor bird kept getting confused about which step it was building its nest on. Five different steps had partly built nests on them, the middle two being closest to completion, but neither fully built. The bird then laid two eggs in one half-nest and one in another. Clearly it was confused by the local cues provided by the fire-escape steps. Its nest-building program depended on simple rules, like ‘Put more material in corner of metal step.’ The tidy nest of a thrush emerges from the most basic of instincts.
Or look at a tree. Its trunk manages to grow in width and strength just as fast as is necessary to bear the weight of its branches, which are themselves a brilliant compromise between strength and flexibility; its leaves are a magnificent solution to the problem of capturing sunlight while absorbing carbon dioxide and losing as little water as possible: they are wafer-thin, feather-light, shaped for maximum exposure to the light, with their pores on the shady underside. The whole structure can stand for hundreds or even thousands of years without collapsing, yet can also grow continuously throughout that time – a dream that lies far beyond the capabilities of human engineers. All this is achieved without a plan, let alone a planner. The tree does not even have a brain. Its design and implementation emerge from the decisions of its trillions of single cells. Compared with animals, plants dare not rely on brain-directed behaviour, because they cannot run away from grazers, and if a grazer ate the brain, it would mean death. So plants can withstand almost any loss, and regenerate easily. They are utterly decentralised. It is as if an entire country’s economy emerged from just the local incentives and responses of its people. (Oh, hang on …)
Or take a termite mound in the Australian outback. Tall, buttressed, ventilated and oriented with respect to the sun, it is a perfect system for housing a colony of tiny insects in comfort and gentle warmth – as carefully engineered as any cathedral. Yet there is no engineer. The units in this case are whole termites, rather than cells, but the system is no more centralised than in a tree or an embryo. Each grain of sand or mud that is used to construct the mound is carried to its place by a termite acting under no instruction, and with no plan in (no) mind. The insect is reacting to local signals. It is as if a human language, with all its syntax and grammar, were to emerge spontaneously from the actions of its individual speakers, with nobody laying down the rules. (Oh, hang on …)
That is indeed exactly how languages emerged, in just the same fashion that the language of DNA developed – by evolution. Evolution is not confined to systems that run on DNA. One of the great intellectual breakthroughs of recent decades, led by two evolutionary theorists named Rob Boyd and Pete Richerson, is the realisation that Darwin’s mechanism of selective survival resulting in cumulative complexity applies to human culture in all its aspects too. Our habits and our institutions, from language to cities, are constantly changing, and the mechanism of change turns out to be surprisingly Darwinian: it is gradual, undirected, mutational, inexorable, combinatorial, selective and in some vague sense progressive.
Scientists used to object that evolution could not occur in culture because culture did not come in discrete particles, nor did it replicate faithfully or mutate randomly, like DNA. This turns out not to be true. Darwinian change is inevitable in any system of information transmission so long as there is some lumpiness in the things transmitted, some fidelity of transmission and a degree of randomness, or trial and error, in innovation. To say that culture ‘evolves’ is not metaphorical.
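To see why those three conditions are enough, consider a toy simulation – my own illustration, not Boyd and Richerson’s model – in which a ‘cultural variant’ is just a bundle of twenty yes/no traits, copied mostly faithfully, occasionally miscopied, and preferentially copied from whoever happens to fit an arbitrary environment best:

```python
# Toy model of cultural evolution: discrete traits ('lumpiness'), mostly
# faithful copying, occasional random error, and selective survival.
# All parameters are illustrative assumptions.
import random

TRAITS = 20
POP = 200
COPY_ERROR = 0.01          # chance that any one trait is miscopied

target = [random.randint(0, 1) for _ in range(TRAITS)]   # an arbitrary 'environment'

def fitness(variant):
    """How many of the variant's traits happen to suit the environment."""
    return sum(1 for a, b in zip(variant, target) if a == b)

def copy(variant):
    """Faithful transmission, with rare random mutation of individual traits."""
    return [t if random.random() > COPY_ERROR else 1 - t for t in variant]

population = [[random.randint(0, 1) for _ in range(TRAITS)] for _ in range(POP)]
for generation in range(101):
    if generation % 20 == 0:
        print(f"generation {generation:3d}: best fit {max(map(fitness, population))}/{TRAITS}")
    # Each newcomer copies the best of a few randomly encountered models.
    population = [copy(max(random.sample(population, 3), key=fitness))
                  for _ in range(POP)]
```

No variant is designed and nobody plans the improvement, yet the population’s fit to its environment ratchets up generation by generation – which is all the word ‘evolves’ is being asked to mean here.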
The evolution of language
There is an almost perfect parallel between the evolution of DNA sequences and the evolution of written and spoken language. Both consist of linear digital codes. Both evolve by selective survival of sequences generated by at least partly random variation. Both are combinatorial systems capable of generating effectively infinite diversity from a small number of discrete elements. Languages mutate, diversify, evolve by descent with modification and merge in a ballet of unplanned beauty. Yet the end result is structure, and rules of grammar and syntax as rigid and formal as you could want. ‘The formation of different languages, and of distinct species, and the proofs that both have been developed through a gradual process, are curiously parallel,’ wrote Charles Darwin in The Descent of Man.
This makes it possible to think of language as a designed and rule-based thing. And for generations, this was the way foreign languages were taught. At school I learned Latin and Greek as if they were cricket or chess: you can do this, but not that, to verbs, nouns and plurals. A bishop can move diagonally, a batsman can run a leg bye, and a verb can take the accusative. Eight years of this rule-based stuff, taught by some of the finest teachers in the land for longer hours each week than any other topic, and I was far from fluent – indeed, I quickly forgot what little I had learned once I was allowed to abandon Latin and Greek. Top–down language teaching just does not work well – it’s like learning to ride a bicycle in theory, without ever getting on one. Yet a child of two learns English, which has just as many rules and regulations as Latin, indeed rather more, without ever being taught. An adolescent picks up a foreign language, conventions and all, by immersion. Having a training in grammar does not (I reckon) help prepare you for learning a new language much, if at all. It’s been staring us in the face for years: the only way to learn a language is bottom–up.