The Evolution of Everything: How Small Changes Transform Our World

Certainly, there do seem to be some remarkably fortuitous features of our own universe without which life would be impossible. If the cosmological constant were any larger, the pressure of antigravity would be greater and the universe would have blown itself to smithereens long before galaxies, stars and planets could have evolved. Electrical and nuclear forces are just the right strength for carbon to be one of the most common elements, and carbon is vital to life because of its capacity to form multiple bonds. Molecular bonds are just the right strength to be stable but breakable at the sort of temperatures found at the typical distance of a planet from a star: any weaker and the universe would be too hot for chemistry, any stronger and it would be too cold.

True, but to anybody outside a small clique of cosmologists who had spent too long with their telescopes, the idea of the anthropic principle was either banal or barmy, depending on how seriously you took it. It so obviously confuses cause and effect. Life adapted to the laws of physics, not vice versa. In a world where water is liquid, carbon can polymerise and solar systems last for billions of years, life emerged as a carbon-based system with water-soluble proteins in fluid-filled cells. In a different world, a different kind of life might emerge, if it could. As David Waltham puts it in his book Lucky Planet, ‘It is all but inevitable that we occupy a favoured location, one of the rare neighbourhoods where by-laws allow the emergence of intelligent life.’ No anthropic principle needed.

Waltham himself goes on to make the argument that the earth may be rare or even unique because of the string of ridiculous coincidences required to produce a planet with a stable temperature and liquid water on it for four billion years. The moon was a particular stroke of luck, having been formed by an interplanetary collision and having then withdrawn slowly into space as a result of the earth’s tides (it is now ten times as far away as when it first formed). Had the moon been a tiny bit bigger or smaller, and the earth’s day a tiny bit longer or shorter after the collision, then we would have had an unstable axis and a tendency to periodic life-destroying climate catastrophes that would have precluded the emergence of intelligent life. God might claim credit for this lunar coincidence, but Gaia – James Lovelock’s theory that life itself controls the climate – cannot. So we may be extraordinarily lucky and vanishingly rare. But that does not make us special: we would not be here if it had not worked out so far.

Leave the last word on the anthropic principle to Douglas Adams: ‘Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in – an interesting hole I find myself in – fits me rather neatly, doesn’t it? In fact it fits me staggeringly well, may have been made to have me in it!”’

Thinking for ourselves

It is no accident that political and economic enlightenment came in the wake of Newton and his followers. As David Bodanis argues in his biography of Voltaire and his mistress, Passionate Minds, people would be inspired by Newton’s example to question traditions around them that had apparently been accepted since time immemorial. ‘Authority no longer had to come from what you were told by a priest or a royal official, and the whole establishment of the established church or the state behind them. It could come, dangerously, from small, portable books – and even from ideas you came to yourself.’

Gradually, by reading Lucretius and by experiment and thought, the Enlightenment embraced the idea that you could explain astronomy, biology and society without recourse to intelligent design. Nicolaus Copernicus, Galileo Galilei, Baruch Spinoza and Isaac Newton made their tentative steps away from top–down thinking and into the bottom–up world. Then, with gathering excitement, Locke and Montesquieu, Voltaire and Diderot, Hume and Smith, Franklin and Jefferson, Darwin and Wallace, would commit similar heresies against design. Natural explanations displaced supernatural ones. The emergent world emerged.

2

The Evolution of Morality

O miserable minds of men! O hearts that cannot see!

Beset by such great dangers and in such obscurity

You spend your lot of life! Don’t you know it’s plain

That all your nature yelps for is a body free from pain,

And, to enjoy pleasure, a mind removed from fear and care?

Lucretius, De Rerum Natura, Book 2, lines 1–5

Soon a far more subversive thought evolved from the followers of Lucretius and Newton. What if morality itself was not handed down from the Judeo-Christian God as a prescription, and was not even the imitation of a Platonic ideal, but was a spontaneous thing produced by social interaction among people seeking to find ways to get along? In 1689, John Locke argued for religious tolerance – though not for atheists or Catholics – and brought a storm of protest down upon his head from those who saw government enforcement of religious orthodoxy as the only thing that prevented society from descending into chaos. But the idea of spontaneous morality did not die out, and some time later David Hume and then Adam Smith began to dust it off and show it to the world: morality as a spontaneous phenomenon. Hume realised that it was good for society if people were nice to each other, so he thought that rational calculation, rather than moral instruction, lay behind social cohesion. Smith went one step further, and suggested that morality emerged unbidden and unplanned from a peculiar feature of human nature: sympathy.

Quite how a shy, awkward, unmarried professor from Kirkcaldy who lived with his mother and ended his life as a customs inspector came to have such piercing insights into human nature is one of history’s great mysteries. But Adam Smith was lucky in his friends. Being taught by the brilliant Irish lecturer Francis Hutcheson, talking regularly with David Hume, and reading Denis Diderot’s new Encyclopédie, with its relentless interest in bottom–up explanations, gave him plenty with which to get started. At Balliol College, Oxford, he found the lecturers ‘had altogether given up even the pretence of teaching’, but the library was ‘marvellous’. Teaching in Glasgow gave him experience of merchants in a thriving trading port and ‘a feudal, Calvinist world dissolving into a commercial, capitalist one’. Glasgow had seen explosive growth thanks to increasing trade with the New World in the eighteenth century, and was fizzing with entrepreneurial energy. Later, floating around France as the tutor to the young Duke of Buccleuch enabled Smith to meet d’Holbach and Voltaire, who thought him ‘an excellent man. We have nothing to compare with him.’ But that was after his first, penetrating book on human nature and the evolution of morality. Anyway, somehow this shy Scottish man stumbled upon the insights to explore two gigantic ideas that were far ahead of their time. Both concerned emergent, evolutionary phenomena: things that are the result of human action, but not the result of human design.

Adam Smith spent his life exploring and explaining such emergent phenomena, beginning with language and morality, moving on to markets and the economy, ending with the law, though he never published his planned book on jurisprudence. Smith began lecturing on moral philosophy at Glasgow University in the 1750s, and in 1759 he put together his lectures as a book, The Theory of Moral Sentiments. Today it seems nothing remarkable: a dense and verbose eighteenth-century ramble through ideas about ethics. It is not a rattling read. But in its time it was surely one of the most subversive books ever written. Remember that morality was something that you had to be taught, and that, without Jesus telling us what to teach, it could not even exist. To try to raise a child without moral teaching and expect him to behave well was like raising him without Latin and expecting him to recite Virgil. Adam Smith begged to differ. He thought that morality owed little to teaching and nothing to reason, but evolved by a sort of reciprocal exchange within each person’s mind as he or she grew from childhood, and within society. Morality therefore emerged as a consequence of certain aspects of human nature in response to social conditions.

As the Adam Smith scholar James Otteson has observed, Smith, who wrote a history of astronomy early in his career, saw himself as following explicitly in Newton’s footsteps, both by looking for regularities in natural phenomena and by employing the parsimony principle of using as simple an explanation as possible. He praised Newton in his history of astronomy for the fact that he ‘discovered that he could join together the movement of the planets by so familiar a principle of connection’. Smith was also part of a Scottish tradition that sought cause and effect in the history of a topic: instead of asking what is the perfect Platonic ideal of a moral system, ask rather how it came about.

It was exactly this modus operandi that Smith brought to moral philosophy. He wanted to understand where morality came from, and to explain it simply. As so often with Adam Smith, he deftly avoided the pitfalls into which later generations would fall. He saw straight through the nature-versus-nurture debate and came up with a nature-via-nurture explanation that was far ahead of its time. He starts The Theory of Moral Sentiments with a simple observation: we all enjoy making other people happy.

How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortunes of others, and render their happiness necessary to him, though he derives nothing from it, but the pleasure of seeing it.

And we all desire what he calls mutual sympathy of sentiments: ‘Nothing pleases us more than to observe in other men a fellow-feeling with all the emotions of our own breast.’ Yet the childless Smith observed that a child does not have a sense of morality, and has to find out the hard way that he or she is not the centre of the universe. Gradually, by trial and error, a child discovers what behaviour leads to mutual sympathy of sentiments, and therefore can make him or her happy by making others happy. It is through everybody accommodating their desires to those of others that a system of shared morality arises, according to Smith. An invisible hand (the phrase first appears in Smith’s lectures on astronomy, then here in Moral Sentiments and once more in The Wealth of Nations) guides us towards a common moral code. Otteson explains that the hand is invisible, because people are not setting out to create a shared system of morality; they aim only to achieve mutual sympathy now with the people they are dealing with. The parallel with Smith’s later explanation of the market is clear to see: both are phenomena that emerge from individual actions, but not from deliberate design.

Smith’s most famous innovation in moral philosophy is the ‘impartial spectator’, whom we imagine to be watching over us when we are required to be moral. In other words, just as we learn to be moral by judging others’ reactions to our actions, so we can imagine those reactions by positing a neutral observer who embodies our conscience. What would a disinterested observer, who knows all the facts, think of our conduct? We get pleasure from doing what he recommends, and guilt from not doing so. Voltaire put it pithily: ‘The safest course is to do nothing against one’s conscience. With this secret, we can enjoy life and have no fear from death.’

How morality emerges

There is, note, no need for God in this philosophy. As a teacher of Natural Theology among other courses, Smith was no declared atheist, but occasionally he strayed dangerously close to Lucretian scepticism. It is hardly surprising that he at least paid lip service to God, because three of his predecessors at Glasgow University, including Hutcheson, had been charged with heresy for not sticking to Calvinist orthodoxy. The mullahs of the day were vigilant. There remains one tantalising anecdote from a student, a disapproving John Ramsay, that Smith ‘petitioned the Senatus … to be relieved of the duty of opening his class with a prayer’, and, when refused, that his lectures led his students to ‘draw an unwarranted conclusion, viz. that the great truths of theology, together with the duties which man owes to God and his neighbours, may be discovered in the light of nature without any special revelation’. The Adam Smith scholar Gavin Kennedy points out that in the sixth edition (1790) of The Theory of Moral Sentiments, published after his devout mother died, Smith excised or changed many religious references. He may have been a closet atheist, but he might also have been a theist, not taking Christianity literally, but assuming that some kind of god implanted benevolence in the human breast.

Morality, in Smith’s view, is a spontaneous phenomenon, in the sense that people decide their own moral codes by seeking mutual sympathy of sentiments in society, and moralists then observe and record these conventions and teach them back to people as top–down instructions. Smith is essentially saying that the priest who tells you how to behave is basing his moral code on observations of what moral people actually do.

There is a good parallel with teachers of grammar, who do little more than codify the patterns they see in everyday speech and tell them back to us as rules. Only occasionally, as with split infinitives, do their rules run counter to what good writers do. Of course, it is possible for a priest to invent and promote a new rule of morality, just as it is possible for a language maven to invent and promote a new rule of grammar or syntax, but it is remarkably rare. In both cases, what happens is that usage changes and the teachers gradually go along with it, sometimes pretending to be the authors.

So, for example, in my lifetime, disapproval of homosexuality has become ever more morally unacceptable in the West, while disapproval of paedophilia has become ever more morally mandatory. Male celebrities who broke the rules with under-age girls long ago and thought little of it now find themselves in court and in disgrace; while others who broke the (then) rules with adult men long ago and risked disgrace can now openly speak of their love. Don’t get me wrong: I approve of both these trends – but that’s not my point. My point is that the changes did not come about because some moral leader or committee ordained them, at least not mainly, let alone that some biblical instruction to make the changes came to light. Rather, the moral negotiation among ordinary people gradually changed the common views in society, with moral teachers reflecting the changes along the way. Morality, quite literally, evolved. In just the same way, words like ‘enormity’ and ‘prevaricate’ have changed their meaning in my lifetime, though no committee met to consider an alteration in the meaning of the words, and there is very little the grammarians can do to prevent it. (Indeed, grammarians spend most of their time deploring linguistic innovation.) Otteson points out that Smith in his writing uses the words ‘brothers’ and ‘brethren’ interchangeably, with a slight preference for the latter. Today, however, the rules have changed, and you would only use ‘brethren’ for the plural of brothers if you were being affected, antiquarian or mocking.

Smith was acutely aware of this parallel with language, which is why he insisted on appending his short essay on the origin of language to his Theory of Moral Sentiments in its second and later editions. In the essay, Smith makes the point that the laws of language are an invention, rather than a discovery – unlike, say, the laws of physics. But they are still laws: children are corrected by their parents and their peers if they say ‘bringed’ instead of ‘brought’. So language is an ordered system, albeit arrived at spontaneously through some kind of trial and error among people trying to make ‘their mutual wants intelligible to each other’. Nobody is in charge, but the system is orderly. What a peculiar and novel idea. What a subversive thought. If God is not needed for morality, and if language is a spontaneous system, then perhaps the king, the pope and the official are not quite as vital to the functioning of an orderly society as they pretend?

As the American political scientist Larry Arnhart puts it, Smith is a founder of a key tenet of liberalism, because he rejects the Western tradition that morality must conform to a transcendental cosmic order, whether in the form of a cosmic God, a cosmic Reason, or a cosmic Nature. ‘Instead of this transcendental moral cosmology, liberal morality is founded on an empirical moral anthropology, in which moral order arises from within human experience.’

Above all, Smith allows morality and language to change, to evolve. As Otteson puts it, for Smith, moral judgements are generalisations arrived at inductively on the basis of past experience. We log our own approvals and disapprovals of our own and others’ conduct, and observe others doing the same. ‘Frequently repeated patterns of judgement can come to have the appearance of moral duties or even commandments from on high, while patterns that recur with less frequency will enjoy commensurately less confidence.’ It is in the messy empirical world of human experience that we find morality. Moral philosophers observe what we do; they do not invent it.

Better angels

Good grief. Here is an eighteenth-century, middle-class Scottish professor saying that morality is an accidental by-product of the way human beings adjust their behaviour towards each other as they grow up; saying that morality is an emergent phenomenon that arises spontaneously among human beings in a relatively peaceful society; saying that goodness does not need to be taught, let alone associated with the superstitious belief that it would not exist but for the divine origin of an ancient Palestinian carpenter. Smith sounds remarkably like Lucretius (whom he certainly read) in parts of his Moral Sentiments book, but he also sounds remarkably like Steven Pinker of Harvard University today discussing the evolution of society towards tolerance and away from violence.

As I will explore, there is in fact a fascinating convergence here. Pinker’s account of morality growing stronger over time is, at bottom, very like Smith’s. To put it at its baldest, a Smithian child, developing his sense of morality in a violent medieval society in Prussia (say) by trial and error, would end up with a moral code quite different from that of such a child growing up in a peaceful German (say) suburb today. The medieval person would be judged moral if he killed people in defence of his honour or his city; whereas today he would be thought moral if he refused meat and gave copiously to charity, and thought shockingly immoral if he killed somebody for any reason at all, and especially for honour. In Smith’s evolutionary view of morality, it is easy to see how morality is relative and will evolve to a different end point in different societies, which is exactly what Pinker documents.

Pinker’s book The Better Angels of Our Nature chronicles the astonishing and continuing decline in violence of recent centuries. We have just lived through the decade with the lowest global death rate in warfare on record; we have seen homicide rates fall by 99 per cent in most Western countries since medieval times; we have seen racial, sexual, domestic, corporal, capital and other forms of violence in headlong retreat; we have seen discrimination and prejudice go from normal to disgraceful; we have come to disapprove of all sorts of violence as entertainment, even against animals. This is not to say there is no violence left, but the declines that Pinker documents are quite remarkable, and our horror at the violence that still remains implies that the decline will continue. Our grandchildren will stand amazed at some of the things we still find quite normal.

To explain these trends, Pinker turns to a theory first elaborated by Norbert Elias, who had the misfortune to publish it as a Jewish refugee from Germany in Britain in 1939, shortly before he was interned by the British on the grounds that he was German. Not a good position from which to suggest that violence and coercion were diminishing. It was not until it was translated into English three decades later in 1969, in a happier time, that his theory was widely appreciated. Elias argued that a ‘civilising process’ had sharply altered the habits of Europeans since the Middle Ages, that as people became more urban, crowded, capitalist and secular, they became nicer too. He hit upon this paradoxical realisation – for which there is now, but was not then, strong statistical evidence – by combing the literature of medieval Europe and documenting the casual, frequent and routine violence that was then normal. Feuds flared into murders all the time; mutilation and death were common punishments; religion enforced its rules with torture and sadism; entertainments were often violent. Barbara Tuchman in her book A Distant Mirror gives an example of a popular game in medieval France: people with their hands tied behind their backs competed to kill a cat nailed to a post by battering it with their heads, risking the loss of an eye from the scratching of the desperate cat in the process. Ha ha.

Elias argued that moral standards evolved; to illustrate the point he documented the etiquette guides published by Erasmus and other philosophers. These guides are full of suggestions about table manners, toilet manners and bedside manners that seem unnecessary to state, but are therefore revealing: ‘Don’t greet someone while they are urinating or defecating … don’t blow your nose on to the table-cloth or into your fingers, sleeve or hat … turn away when spitting lest your saliva fall on someone … don’t pick your nose while eating.’ In short, the very fact that these injunctions needed mentioning implies that medieval European life was pretty disgusting by modern standards. Pinker comments: ‘These are the kind of directives you’d expect a parent to give to a three-year-old, not a great philosopher to a literate readership.’ Elias argued that the habits of refinement, self-control and consideration that are second nature to us today had to be acquired. As time went by, people ‘increasingly inhibited their impulses, anticipated the long-term consequences of their actions, and took other people’s thoughts and feelings into consideration’. In other words, not blowing your nose on the tablecloth was all one with not stabbing your neighbour. It’s a bit like a historical version of the broken-window theory: intolerance of small crimes leads to intolerance of big ones.

Doux commerce

But how were these gentler habits acquired? Elias realised that we have internalised the punishment for breaking these rules (and the ones against more serious violence) in the form of a sense of shame. That is to say, just as Adam Smith argued, we rely on an impartial spectator, and we learned earlier and earlier in life to see his point of view as he became ever more censorious. But why? Elias and Pinker give two chief reasons: government and commerce. With an increasingly centralised government focused on the king and his court, rather than local warlords, people had to behave more like courtiers and less like warriors. That meant not only less violent, but also more refined. Leviathan enforced the peace, if only to have more productive peasants to tax. Revenge for murder was nationalised as a crime to be punished, rather than privatised as a wrong to be righted. At the same time, commerce led people to value the opportunity to be trusted by a stranger in a transaction. As money-based interactions among strangers multiplied, people increasingly began to think of neighbours as potential trading partners rather than potential prey. Killing the shopkeeper makes no sense. So empathy, self-control and morality became second nature, though morality was always a double-edged sword, as likely to cause violence as to prevent it through most of history.

Lao Tzu saw this twenty-six centuries ago: ‘The more prohibitions you have, the less virtuous people will be.’ Montesquieu’s phrase for the calming effect of trade on human violence, intolerance and enmity was ‘doux commerce’ – sweet commerce. And he has been amply vindicated in the centuries since. The richer and more market-oriented societies have become, the nicer people have behaved. Think of the Dutch after 1600, the Swedes after 1800, the Japanese after 1945, the Germans likewise, the Chinese after 1978. The long peace of the nineteenth century coincided with the growth of free trade. The paroxysm of violence that convulsed the world in the first half of the twentieth century coincided with protectionism.

Countries where commerce thrives have far less violence than countries where it is suppressed. Does Syria suffer from a surfeit of commerce? Or Zimbabwe? Or Venezuela? Is Hong Kong largely peaceful because it eschews commerce? Or California? Or New Zealand? I once interviewed Pinker in front of an audience in London, and was very struck by the passion of his reply when an audience member insisted that profit was a form of violence and was on the increase. Pinker simply replied with a biographical story. His grandfather, born in Warsaw in 1900, emigrated to Montreal in 1926, worked for a shirt company (the family had made gloves in Poland), was laid off during the Great Depression, and then, with his grandmother, sewed neckties in his apartment, eventually earning enough to set up a small factory, which they ran until their deaths. And yes, it made a small profit (just enough to pay the rent and bring up Pinker’s mother and her brothers), and no, his grandfather never hurt a fly. Commerce, he said, cannot be equated with violence.

‘Participation in capitalist markets and bourgeois virtues has civilized the world,’ writes Deirdre McCloskey in her book The Bourgeois Virtues. ‘Richer and more urban people, contrary to what the magazines of opinion sometimes suggest, are less materialistic, less violent, less superficial than poor and rural people’ (emphasis in original).

How is it then that conventional wisdom – especially among teachers and religious leaders – maintains that commerce is the cause of nastiness, not niceness? That the more we grow the economy and the more we take part in ‘capitalism’, the more selfish, individualistic and thoughtless we become? This view is so widespread it even leads such people to assume – against the evidence – that violence is on the increase. As Pope Francis put it in his 2013 apostolic exhortation Evangelii Gaudium, ‘unbridled’ capitalism has made the poor miserable even as it enriched the rich, and is responsible for the fact that ‘lack of respect for others and violence are on the rise’. Well, this is just one of those conventional wisdoms that is plain wrong. There has been a decline in violence, not an increase, and it has been fastest in the countries with the least bridled versions of capitalism – not that there is such a thing as unbridled capitalism anywhere in the world. The ten most violent countries in the world in 2014 – Syria, Afghanistan, South Sudan, Iraq, Somalia, Sudan, Central African Republic, Democratic Republic of the Congo, Pakistan and North Korea – are all among the least capitalist. The ten most peaceful – Iceland, Denmark, Austria, New Zealand, Switzerland, Finland, Canada, Japan, Belgium and Norway – are all firmly capitalist.

My reason for describing Pinker’s account of the Elias theory in such detail is that it is a thoroughly evolutionary argument. Even when Pinker credits Leviathan – government policy – for reducing violence, he implies that the policy is as much an attempt to reflect changing sensibility as to change sensibility. Besides, even Leviathan’s role is unwitting: it did not set out to civilise, but to monopolise. It is an extension of Adam Smith’s theory, uses Smith’s historical reasoning, and posits that the moral sense, and the propensity to violence and sordid behaviour, evolve. They evolve not because somebody ordains that they should evolve, but spontaneously. The moral order emerges and continually changes. Of course, it can evolve towards greater violence, and has done so from time to time, but mostly it has evolved towards peace, as Pinker documents in exhaustive detail. In general, over the past five hundred years in Europe and much of the rest of the world, people became steadily less violent, more tolerant and more ethical, without even realising they were doing so. It was not until Elias spotted the trend in words, and later historians then confirmed it in statistics, that we even knew it was happening. It happened to us, not we to it.

The evolution of law

It is an extraordinary fact, unremembered by most, that in the Anglosphere people live by laws that did not originate with governments at all. British and American law derives ultimately from the common law, which is a code of ethics that was written by nobody and everybody. That is to say, unlike the Ten Commandments or most statute law, the common law emerges and evolves through precedent and adversarial argument. It ‘evolves incrementally, rather than leaps convulsively or stagnates idly’, in the words of legal scholar Allan Hutchinson. It is ‘a perpetual work-in-progress – evanescent, dynamic, messy, productive, tantalizing, and bottom up’. The author Kevin Williamson reminds us to be astonished by this fact: ‘The most successful, most practical, most cherished legal system in the world did not have an author. Nobody planned it, no sublime legal genius thought it up. It emerged in an iterative, evolutionary manner much like a language emerges.’ Trying to replace the common law with a rationally designed law is, he jests, like trying to design a better rhinoceros in a laboratory.

Judges change the common law incrementally, adjusting legal doctrine case by case to fit the facts on the ground. When a new puzzle arises, different judges come to different conclusions about how to deal with it, and the result is a sort of genteel competition, as successive courts gradually choose which line they prefer. In this sense, the common law is built by natural selection.

Common law is a peculiarly English development, found mainly in countries that are former British colonies or have been influenced by the Anglo-Saxon tradition, such as Australia, India, Canada and the United States. It is a beautiful example of spontaneous order. Before the Norman Conquest, different rules and customs applied in different regions of England. But after 1066 judges created a common law by drawing on customs across the country, with an occasional nod towards the rulings of monarchs. Powerful Plantagenet kings such as Henry II set about standardising the laws to make them consistent across the country, and absorbed much of the common law into the royal courts. But they did not invent it. By contrast, European rulers drew on Roman law, and in particular a compilation of rules issued by the Emperor Justinian in the sixth century that was rediscovered in eleventh-century Italy. Civil law, as practised on the continent of Europe, is generally written by government.

In common law, the elements needed to prove the crime of murder, for instance, are contained in case law rather than defined by statute. To ensure consistency, courts abide by precedents set by higher courts examining the same issue. In civil-law systems, by contrast, codes and statutes are designed to cover all eventualities, and judges have a more limited role of applying the law to the case in hand. Past judgements are no more than loose guides. When it comes to court cases, judges in civil-law systems tend towards being investigators, while their peers in common-law systems act as arbiters between parties that present their arguments.

Which of these systems you prefer depends on your priorities. Jeremy Bentham argued that the common law lacked coherence and rationality, and was a repository of ‘dead men’s thoughts’. The libertarian economist Gordon Tullock, a founder of the public-choice school, argued that the common-law method of adjudication is inherently inferior because of its duplicative costs, inefficient means of ascertaining the facts, and scope for wealth-destroying judicial activism.

Others respond that the civil-law tradition, in its tolerance of arbitrary confiscation by the state and its tendency to mandate that which it does not outlaw, has proved less a friend of liberty than the common law. Friedrich Hayek advanced the view that the common law contributed to greater economic welfare because it was less interventionist, less under the tutelage of the state, and was better able to respond to change than civil legal systems; indeed, it was for him a legal system that led, like the market, to a spontaneous order.

A lot of Britain’s continuing discomfort with the European Union derives from the contrast between the British tradition of bottom–up law-making and the top–down Continental version. The European Parliament member Daniel Hannan frequently reminds his colleagues of the bias towards liberty of the common law: ‘This extraordinary, sublime idea that law does not emanate from the state but that rather there was a folk right of existing law that even the king and his ministers were subject to.’

The competition between these two traditions is healthy. But the point I wish to emphasise is that it is perfectly possible to have law that emerges, rather than is created. To most people that is a surprise. They vaguely assume in the backs of their minds that the law is always invented, rather than evolved. As the economist Don Boudreaux has argued, ‘Law’s expanse is so vast, its nuances so many and rich, and its edges so frequently changing that the popular myth that law is that set of rules designed and enforced by the state becomes increasingly absurd.’

It is not just the common law that evolves through replication, variation and selection. Even civil law, and constitutional interpretation, see gradual changes, some of which stick and some of which do not. The decisions as to which of these changes stick are not taken by omniscient judges, and nor are they random; they are chosen by the process of selection. As the legal scholar Oliver Goodenough argues, this places the evolutionary explanation at the heart of the system as opposed to appealing to an outside force. Both ‘God made it happen’ and ‘Stuff happens’ are external causes, whereas evolution is a ‘rule-based cause internal to time and space as we experience them’.

3

The Evolution of Life

A mistake I strongly urge you to avoid for all you’re worth,

An error in this matter you should give the widest berth:

Namely don’t imagine that the bright lights of your eyes

Were purpose made so we could look ahead, or that our thighs

And calves were hinged together at the joints and set on feet

So we could walk with lengthy stride, or that forearms fit neat

To brawny upper arms, and are equipped on right and left

With helping hands, solely that we be dexterous and deft

At undertaking all the things we need to do to live,

This rationale and all the others like it people give,

Jumbles effect and cause, and puts the cart before the horse …

Lucretius, De Rerum Natura, Book 4, lines 823–33

Charles Darwin did not grow up in an intellectual vacuum. It is no accident that alongside his scientific apprenticeship he had a deep inculcation in the philosophy of the Enlightenment. Emergent ideas were all around him. He read his grandfather’s Lucretius-emulating poems. ‘My studies consist in Locke and Adam Smith,’ he wrote from Cambridge, citing two of the most bottom–up philosophers. Probably it was Smith’s The Moral Sentiments that he read, since it was more popular in universities than The Wealth of Nations. Indeed, one of the books that Darwin read in the autumn of 1838 after returning from the voyage of the Beagle and when about to crystallise the idea of natural selection was Dugald Stewart’s biography of Adam Smith, from which he got the idea of competition and emergent order. The same month he read, or reread, the political economist Robert Malthus’s essay on population, and was struck by the notion of a struggle for existence in which some thrived and others did not, an idea which helped trigger the insight of natural selection. He was friendly at the time with Harriet Martineau, a firebrand radical who campaigned for the abolition of slavery and also for the ‘marvellous’ free-market ideas of Adam Smith. She was a close confidante of Malthus. Through his mother’s (and future wife’s) family, the Wedgwoods, Darwin moved in a circle of radicalism, trade and religious dissent, meeting people like the free-market MP and thinker James Mackintosh. The evolutionary biologist Stephen Jay Gould once went so far as to argue that natural selection ‘should be viewed as an extended analogy … to the laissez-faire economics of Adam Smith’. In both cases, Gould argued, balance and order emerged from the actions of individuals, not from external or divine control. As a Marxist, Gould surprisingly approved of this philosophy – for biology, but not for economics: ‘It is ironic that Adam Smith’s system of laissez faire does not work in his own domain of economics, for it leads to oligopoly and revolution.’

In short, Charles Darwin’s ideas evolved, themselves, from ideas of emergent order in human society that were flourishing in early-nineteenth-century Britain. The general theory of evolution came before the special theory. All the same, Darwin faced a formidable obstacle in getting people to see undirected order in nature. That obstacle was the argument from design as set out, most ably, by William Paley.

In the last book that he published, in 1802, the theologian William Paley set out the argument for biological design based upon purpose. In one of the finest statements of design logic, from an indubitably fine mind, he imagined stubbing his toe against a rock while crossing a heath, then imagined his reaction if instead his toe had encountered a watch. Picking up the watch, he would conclude that it was man-made: ‘There must have existed, at some time, and at some place or other, an artificer or artificers, who formed [the watch] for the purpose which we find it actually to answer; who comprehended its construction, and designed its use.’ If a watch implies a watchmaker, then how could the exquisite purposefulness of an animal not imply an animal-maker? ‘Every indication of contrivance, every manifestation of design, which existed in the watch, exists in the works of nature; with the difference, on the side of nature, of being greater or more, and that in a degree which exceeds all computation.’

Paley’s argument from design was not new. It was Newton’s logic applied to biology. Indeed, it was a version of one of the five arguments for the existence of God advanced by Thomas Aquinas six hundred years before: ‘Whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence.’ And in 1690 the high priest of common sense himself, John Locke, had effectively restated the same idea as if it were so rational that nobody could deny it. Locke found it ‘as impossible to conceive that ever bare incogitative Matter should produce a thinking, intelligent being, as that nothing should produce Matter’. Mind came first, not matter. As Dan Dennett has pointed out, Locke gave an empirical, secular, almost mathematical stamp of approval to the idea that God was the designer.

Hume’s swerve

The first person to dent this cosy consensus was David Hume. In a famous passage from his Dialogues Concerning Natural Religion (published posthumously in 1779), Hume has Cleanthes, his imaginary theist, state the argument from design in powerful and eloquent words:

Look around the world: Contemplate the whole and every part of it: You will find it to be nothing but one great machine, subdivided into an infinite number of lesser machines … All these various machines, and even their most minute parts, are adjusted to each other with an accuracy, which ravishes into admiration all men, who have ever contemplated them. The curious adapting of means to ends, exceeds the productions of human contrivance; of human design, thought, wisdom, and intelligence. Since, therefore the effects resemble each other, we are led to infer, by all the rules of analogy, that the causes also resemble. [Dialogues, 2.5/143]

It’s an inductive inference, Dennett points out: where there’s design there’s a designer, just as where there’s smoke there’s fire.

But Philo, Cleanthes’s imaginary deist interlocutor, brilliantly rebuts the logic. First, it immediately prompts the question of who designed the designer. ‘What satisfaction is there in that infinite progression?’ Then he points out the circular reasoning: God’s perfection explains the world’s design, which proves God’s perfection. And then, how do we know that God is perfect? Might he not have been a ‘stupid mechanic, who imitated others’ and ‘botched and bungled’ his way through different worlds during ‘infinite ages of world making’? Or might not the same argument prove God to be multiple gods, or a ‘perfect anthropomorphite’ with human form, or an animal, or a tree, or a ‘spider who spun the whole complicated mass from his bowels’?

Hume was now enjoying himself. Echoing the Epicureans, he began to pick holes in all the arguments of natural theology. A true believer, Philo said, would stress ‘that there is a great and immeasurable, because incomprehensible, difference between the human and the divine mind’, so it is idolatrous blasphemy to compare the deity to a mere engineer. An atheist, on the other hand, might be happy to concede the purposefulness of nature but explain it by some analogy other than a divine intelligence – as Charles Darwin eventually did.

In short, Hume, like Voltaire, had little time for divine design. By the time he finished, his alter ego Philo had effectively demolished the entire argument from design. Yet even Hume, surveying the wreckage, suddenly halted his assault and allowed the enemy forces to escape the field. In one of the great disappointments in all philosophy, Philo suddenly agrees with Cleanthes at the end, stating that if we are not content to call the supreme being God, then ‘what can we call him but Mind or Thought’? It’s Hume’s Lucretian swerve. Or is it? Anthony Gottlieb argues that if you read it carefully, Hume has buried a subtle hint here, designed not to disturb the pious and censorious even after his death, that mind may be matter.

Dennett contends that Hume’s failure of nerve cannot be explained by fear of persecution for atheism. He arranged to have his book published after his death. In the end it was sheer incredulity that caused him to balk at the ultimate materialist conclusion. Without the Darwinian insight, he just could not see a mechanism by which purpose came from matter.

Through the gap left by Hume stole William Paley. Philo had used the metaphor of the watch, arguing that pieces of metal could ‘never arrange themselves so as to compose a watch’. Though well aware of Philo’s objections, Paley still inferred a mind behind the watch on the heath. It was not that the watch was made of components, or that it was close to perfect in its design, or that it was incomprehensible – arguments that had appealed to a previous generation of physicists and that Hume had answered. It was that it was clearly designed to do a job, not individually and recently but once and originally in an ancestor. Switching metaphors, Paley asserted that ‘there is precisely the same proof that the eye was made for vision, as there is that the telescope was made for assisting it’. The eyes of animals that live in water have a more curved surface than the eyes of animals that live on land, he pointed out, as befits the different refractive indices of the two elements: organs are adapted to the natural laws of the world, rather than vice versa.

But if God is omnipotent, why does he need to design eyes at all? Why not just give animals a magic power of vision without an organ? Paley had an answer of sorts. God could have done ‘without the intervention of instruments or means: but it is in the construction of instruments, in the choice and adaptation of means, that a creative intelligence is seen’. God has been pleased to work within the laws of physics, so that we can have the pleasure of understanding them. In this way, Paley’s modern apologists argue, God cannot be contradicted by the subsequent discovery of evolution by natural selection. He’d put that in place too, so that we could have the pleasure of discovering it.

Paley’s argument boils down to this: the more spontaneous mechanisms you discover to explain the world of living things, the more convinced you should be that there is an intelligence behind them. Confronted with such a logical contortion, I am reminded of one of the John Cleese characters in Monty Python’s Life of Brian, when Brian denies that he is the Messiah: ‘Only the true Messiah denies his divinity.’

Darwin on the eye

Nearly six decades after Paley’s book, Charles Darwin produced a comprehensive and devastating answer. Brick by brick, using insights from an Edinburgh education in bottom–up thinking, from a circumnavigation of the world collecting facts of stone and flesh, from a long period of meticulous observation and induction, he put together an astonishing theory: that the differential replication of competing creatures would produce cumulative complexity that fitted form to function without anybody ever comprehending the rationale in a mind. And thus was born one of the most corrosive concepts in all philosophy. Daniel Dennett in his book Darwin’s Dangerous Idea compares Darwinism to universal acid; it eats through every substance used to contain it. ‘The creationists who oppose Darwinism so bitterly are right about one thing: Darwin’s dangerous idea cuts much deeper into the fabric of our most fundamental beliefs than many of its sophisticated apologists have yet admitted, even to themselves.’

The beauty of Darwin’s explanation is that natural selection has far more power than any designer could ever call upon. It cannot know the future, but it has unrivalled access to information about the past. In the words of the evolutionary psychologists Leda Cosmides and John Tooby, natural selection surveys ‘the results of alternative designs operating in the real world, over millions of individuals, over thousands of generations, and weights alternatives by the statistical distribution of their consequences’. That makes it omniscient about what has worked in the recent past. It can overlook spurious and local results and avoid guesswork, inference or models: it is based on the statistical results of the actual lives of creatures in the actual range of environments they encounter.
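
To make that statistical weighting concrete, here is a minimal toy simulation of my own – an illustration only, not anything drawn from Cosmides and Tooby – in which two competing designs differ slightly in their survival odds. Nothing surveys or plans anything; the better design spreads purely through the distribution of its consequences across many individuals and generations.

```python
# A minimal toy sketch (my own illustration): selection as statistical weighting.
# Two "designs" differ only in survival probability; neither is chosen by
# anyone, yet the slightly better one comes to dominate.
import random

random.seed(42)

SURVIVAL = {"design_A": 0.50, "design_B": 0.55}   # hypothetical survival odds
POP_SIZE = 1000

# Start with the two designs equally common.
population = ["design_A"] * 500 + ["design_B"] * 500

for generation in range(100):
    # Each individual survives, or not, according to its design's odds...
    survivors = [d for d in population if random.random() < SURVIVAL[d]]
    # ...and the survivors reproduce at random to refill the population.
    population = [random.choice(survivors) for _ in range(POP_SIZE)]

share_b = population.count("design_B") / POP_SIZE
print(f"After 100 generations, design_B makes up {share_b:.0%} of the population")
```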

One of the most perceptive summaries of Darwin’s argument was made by one of his fiercest critics. A man named Robert Mackenzie Beverley, writing in 1867, produced what he thought was a devastating demolition of the idea of natural selection. Absolute ignorance is the artificer, he pointed out, trying to take the place of absolute wisdom in creating the world. Or (and here Beverley’s fury drove him into capital letters), ‘IN ORDER TO MAKE A PERFECT AND BEAUTIFUL MACHINE, IT IS NOT REQUISITE TO KNOW HOW TO MAKE IT.’ To which Daniel Dennett, who is fond of this quotation, replies: yes, indeed! That is the essence of Darwin’s idea: that beautiful and intricate organisms can be made without anybody knowing how to make them. A century later, an economist named Leonard Read, in an essay called ‘I, Pencil’, made the point that this is also true of technology. It is indeed the case that in order to make a perfect and beautiful machine, it is not requisite to know how to make it. Among the myriad people who contribute to the manufacture of a simple pencil, from graphite miners and lumberjacks to assembly-line workers and managers, not to mention those who grow the coffee that each of these drinks, there is not one person who knows how to make a pencil from scratch. The knowledge is held in the cloud, between brains, rather than in any individual head. This is one of the reasons, I shall argue in a later chapter, that technology evolves too.

Charles Darwin’s dangerous idea was to take away the notion of intentional design from biology altogether and replace it with a mechanism that builds ‘organized complexity … out of primeval simplicity’ (in Richard Dawkins’s words). Structure and function emerge bit by incremental bit and without resort to a goal of any kind. It’s ‘a process that was as patient as it was mindless’ (Dennett). No creature ever set out mentally intending to see, yet the eye emerged as a means by which animals could see. There is indeed an adapted purposefulness in nature – it makes good sense to say that eyes have a function – but we simply lack the language to describe function that emerged from a backward-looking process, rather than a goal-directed, forward-looking, mind-first one. Eyes evolved, Darwin said, because in the past simple eyes that provided a bit of vision helped the survival and reproduction of their possessors, not because there was some intention on the part of somebody to achieve vision. All our functional phrases are top–down ones. The eye is ‘for seeing’, eyes are there ‘so that’ we can see, seeing is to eyes as typing is to keyboards. The language and its metaphors still imply skyhooks.

Darwin confessed that the evolution of the eye was indeed a hard problem. In 1860 he wrote to the American botanist Asa Gray: ‘The eye to this day gives me a cold shudder, but when I think of the fine known gradation my reason tells me I ought to conquer the odd shudder.’ In On the Origin of Species he had already written: ‘To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree.’

But he then went on to set out how he justified the absurdity. First, the same could have been said of Copernicus. Common sense said the world stood still while the sun turned round it. Then he laid out how an eye could have emerged from nothing, step by step. He invoked ‘numerous gradations’ from a simple and imperfect eye to a complex one, ‘each grade being useful to its possessor’. If such grades could be found among living animals, and they could, then there was no reason to reject natural selection, ‘though insuperable by our imagination’. He had said something similar in 1844, in his first, unpublished essay on natural selection: that the eye ‘may possibly have been acquired by gradual selection of slight but in each case useful deviations’. To which his sceptical wife Emma had replied, in the margin: ‘A great assumption’.

Pax optica

This is exactly what happened, we now know. Each grade was indeed useful to its possessor, because each grade still exists and still is useful to its owner. Each type of eye is just a slight improvement on the one before. A light-sensitive patch on the skin enables a limpet to tell which way is up; a light-sensitive cup enables a species called a slit-shelled mollusc to tell which direction the light is coming from; a pinhole chamber of light-sensitive cells enables the nautilus to focus a simple image of the world in good light; a simple lensed eye enables a murex snail to form an image even in low light; and an adjustable lens with an iris to control the aperture enables an octopus to perceive the world in glorious detail (the invention of the lens is easily explained, because any transparent tissue in the eye would have acted as a partial refractor). Thus even just within the molluscs, every stage of the eye still exists, useful to each owner. How easy then to imagine each stage having existed in the ancestors of the octopus.

Richard Dawkins compares the progression through these grades to climbing a mountain (Mount Improbable) and at no point encountering a slope too steep to surmount. Mountains must be climbed from the bottom up. He shows that there are numerous such mountains – different kinds of eyes in different kinds of animal, from the compound eyes of insects to the multiple and peculiar eyes of spiders – each with a distinct range of partially developed stages showing how one can go step by step. Computer models confirm that there is nothing to suggest any of the stages would confer a disadvantage.
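
The logic of those models is easy to caricature in a few lines of code. The sketch below is my own toy version, not any published eye-evolution simulation: a single number stands in for how well a proto-eye performs, small random mutations are offered, and selection keeps only those that do not make things worse. The design ratchets upward without ever taking a downhill step.

```python
# A toy caricature (mine, not any published eye-evolution model) of climbing
# Mount Improbable: offer small random tweaks to a single performance score
# and keep only those that do not make the design worse.
import random

random.seed(1)

acuity = 0.01                 # crude stand-in for how well a proto-eye works
history = [acuity]

for step in range(5_000):
    tweak = random.gauss(0.0, 0.01)        # a small, undirected mutation
    candidate = acuity + tweak
    if candidate >= acuity:                # selection rejects any downhill step
        acuity = min(candidate, 1.0)       # cap at a nominal maximum
    history.append(acuity)

print(f"Started at {history[0]:.2f}, finished at {acuity:.2f}; "
      f"no step ever reduced performance")
```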

Moreover, the digitisation of biology since the discovery of DNA provides direct and unambiguous evidence of gradual evolution by the progressive alteration of the sequence of letters in genes. We now know that the very same gene, called Pax6, triggers the development of both the compound eye of insects and the simple eye of human beings. The two kinds of eye were inherited from a common ancestor. A version of a Pax gene also directs the development of simple eyes in jellyfish. The ‘opsin’ protein molecules that react to light in the eye can be traced back to the common ancestor of all animals except sponges. Around 700 million years ago, the gene for opsin was duplicated twice to give the three kinds of light-sensitive molecules we possess today. Thus every stage in the evolution of eyes, from the development of light-sensitive molecules to the emergence of lenses and colour vision, can be read directly from the language of the genes. Never has a hard problem in science been so comprehensively and emphatically solved as Darwin’s eye dilemma. Shudder no more, Charles.

Astronomical improbability?

The evidence for gradual, undirected emergence of the opsin molecule by the stepwise alteration of the digital DNA language is strong. But there remains a mathematical objection. The opsin molecule is composed of hundreds of amino acids in a sequence specified by the appropriate gene. If one were to arrive at the appropriate sequence to give opsin its light-detecting properties by trial and error, it would take either a very long time or a very large laboratory. Given that there are twenty types of amino acid, a protein molecule with a hundred amino acids in its chain can exist in 10 to the power of 130 different sequences. That’s a number far greater than the number of atoms in the universe, and far greater than the number of nanoseconds since the Big Bang. So it’s just not possible for natural selection, however many organisms it has to play with for however long, to arrive at a design for an opsin molecule from scratch. And an opsin is just one of tens of thousands of proteins in the body.
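
The arithmetic behind those numbers is easy to verify; here is a back-of-the-envelope sketch using the round figures in the text (the yardsticks for atoms and elapsed nanoseconds are the commonly quoted approximations):

```python
# Back-of-the-envelope check of the combinatorial claim: a chain of 100 amino
# acids, with 20 possibilities at each position.
import math

sequences = 20 ** 100
print(f"Possible sequences: about 10^{math.log10(sequences):.0f}")   # ~10^130

# Commonly quoted, approximate yardsticks for comparison:
atoms_in_observable_universe = 1e80
nanoseconds_since_big_bang = 13.8e9 * 3.15e7 * 1e9                   # ~4e26
print(f"Atoms in the observable universe: about 10^{math.log10(atoms_in_observable_universe):.0f}")
print(f"Nanoseconds since the Big Bang:   about 10^{math.log10(nanoseconds_since_big_bang):.0f}")
```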

Am I heading for a Lucretian swerve? Will I be forced to concede that the combinatorial vastness of the library of possible proteins makes it impossible for evolution to find ones that work? Far from it. We know that human innovation rarely designs things from scratch, but jumps from one technology to the ‘adjacent possible’ technology, recombining existing features. In other words, it takes small, incremental steps. And we know that the same is true of natural selection. So the mathematics is misleading. In a commonly used analogy, you are not assembling a Boeing 747 with a whirlwind in a scrapyard; you are adding one last rivet to an existing design. And here there has been a remarkable recent discovery that makes natural selection’s task much easier.

In a laboratory in Zürich a few years ago, Andreas Wagner asked his student João Rodriguez to use a gigantic assembly of computers to work his way through a map of different metabolic networks to see how far he could get by changing just one step at a time. He chose the glucose system in a common gut bacterium, and his task was to change one link in the whole metabolic chain in such a way that it still worked – that the creature could still make sixty or so bodily ingredients from this one sugar. How far could he get? In species other than the gut bacterium there are thousands of different glucose pathways. How many of them are just a single step different from each other? Rodriguez found he got 80 per cent of the way through a library of a thousand different metabolic pathways at his first attempt, never having to change more than one step at a time and never producing a metabolic pathway that did not work. ‘When João showed me the answer, my first reaction was disbelief,’ wrote Wagner. ‘Worried that this might be a fluke, I asked João for many more random walks, a thousand more, each preserving metabolic meaning, each leading as far as possible, each leaving in a different direction.’ Same result.

Wagner and Rodriguez had stumbled upon a massive redundancy built into the biochemistry of bacteria – and people. Using the metaphor of a ‘Library of Mendel’, an imaginary building in which are stored the unimaginably vast number of all possible genetic sequences, Wagner identified a surprising pattern. ‘The metabolic library is packed to its rafters with books that tell the same story in different ways,’ he writes. ‘Myriad metabolic texts with the same meaning raise the odds of finding any one of them – myriad-fold. Even better, evolution does not just explore the metabolic library like a single casual browser. It crowdsources, employing huge populations of organisms that scour the library for new texts.’ Organisms are crowds of readers going through the Library of Mendel to find texts that make sense.
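
Wagner’s crowd of readers is easier to picture as an algorithm. The sketch below is a deliberately crude caricature of my own – a made-up viability rule and a tiny alphabet, nothing like a real metabolic network – but it shows the shape of the search: change one element of a genotype at a time, and keep the change only if the result still works.

```python
# A crude caricature (mine) of the Wagner/Rodriguez random walks: mutate one
# position of a "genotype" at a time and accept the change only if a
# (made-up) viability test still passes. Real metabolic networks are far
# richer; the point is only the one-step-at-a-time, never-break-it search.
import random

random.seed(7)

ALPHABET = "ABCD"      # stand-in for alternative enzymes at each step
GENOME_LEN = 20

def viable(genotype):
    # Hypothetical rule standing in for "can still make all ~60 ingredients":
    # here, no three identical letters in a row.
    return all(not (genotype[i] == genotype[i+1] == genotype[i+2])
               for i in range(len(genotype) - 2))

genotype = list("ABCD" * (GENOME_LEN // 4))    # start from a viable genotype
assert viable(genotype)

changed_positions = set()
for step in range(10_000):
    pos = random.randrange(GENOME_LEN)
    old = genotype[pos]
    genotype[pos] = random.choice([c for c in ALPHABET if c != old])
    if viable(genotype):
        changed_positions.add(pos)             # accepted: still "works"
    else:
        genotype[pos] = old                    # rejected: undo the single change

print(f"Accepted changes at {len(changed_positions)} of {GENOME_LEN} positions, "
      "never once accepting a non-viable genotype")
```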

Wagner points out that biological innovation must be both conservative and progressive, because as it redesigns the body, it cannot ever produce a non-functional organism. Turning microbes into mammals over millions of years is a bit like flying the Atlantic while rebuilding the plane to a new design. The globin molecule, for example, has roughly the same three-dimensional shape and roughly the same function in plants and insects, but the sequences of amino acids in the two are 90 per cent different.