We’ve been a social species, whose survival has depended upon human cooperation, for hundreds of thousands of years. But over the last 1,000 generations it’s been argued that these social instincts have been rapidly honed and strengthened. This ‘sharp acceleration’ of selection for social traits, writes developmental psychologist Professor Bruce Hood, has left us with brains that are ‘exquisitely engineered to interact with other brains’.
For earlier humans that roamed hostile environments, aggression and physicality had been critical. But the more cooperative we became, the less useful these traits proved. When we started living in settled communities, they grew especially troublesome. There, it would’ve been the people who were better at getting along with others, rather than the physically dominant, who’d have been more successful.
This success in the community would’ve meant greater reproductive success, which would’ve gradually led to the emergence of a new strain of human. These humans had thinner and weaker bones than their ancestors and greatly reduced muscle mass, their physical strength as much as halving. They also had the kind of brain chemistry and hormones that predisposed them to behaviour specialised for settled communal living. They’d have been less interpersonally aggressive, but more adept at the kind of psychological manipulation necessary for negotiating, trading and diplomacy. They’d become expert at controlling their environment of other human minds.
You might compare it to the difference between a wolf and a dog. A wolf survives by cooperating as well as fighting for dominance and killing prey. A dog does so by manipulating its human owner into doing almost anything for it. The power my beloved labradoodle Parker has over my own brain is frankly embarrassing. (I’ve dedicated this bloody book to her.) In fact, this might be more than a mere analogy. Researchers such as Hood argue that modern humans, just like dogs, have gone through a process of domestication. Support for the idea comes partly from the fact that, over the last 20,000 years, our brains have shrunk by between ten and fifteen per cent, the same reduction that’s been observed in all the thirty or so other animals that humans have domesticated. Just as with those creatures, our domestication means we’re tamer than our ancestors, better at reading social signals and more dependent on others. But, writes Hood, ‘no other animal has taken domestication to the extent that we have.’ Our brains may have initially evolved to ‘cope with a potentially threatening world of predators, limited food and adverse weather, but we now rely on it to navigate an equally unpredictable social landscape.’
Unpredictable humans. This is the stuff of story.
For modern humans, controlling the world means controlling other people, and that means understanding them. We’re wired to be fascinated by others and get valuable information from their faces. This fascination begins almost immediately. Whereas ape and monkey parents spend almost no time looking at their babies’ faces, we’re helplessly drawn to them. Newborns are attracted to human faces more than to any other object and, one hour from birth, begin imitating them. By two, they’ve learned to control their social worlds by smiling. By the time they’re adults, they’ve become so adept at reading people that they’re making calculations about status and character automatically, in one tenth of a second. The evolution of our strange, extremely other-obsessed brains has brought with it weird side-effects. Human obsession with faces is so fierce we see them almost anywhere: in fire; in clouds; down spooky corridors; in toast.
We sense minds everywhere too. Just as the brain models the outside world it also builds models of minds. This skill, which is an essential weapon in our social armoury, is known as ‘theory of mind’. It enables us to imagine what others are thinking, feeling and plotting, even when they’re not present. We can experience the world from another’s perspective. For the psychologist Professor Nicholas Epley this capacity, which is obviously essential for storytelling, gave us incredible power. ‘Our species has conquered the Earth because of our ability to understand the minds of others,’ he writes, ‘not because of our opposable thumbs or handiness with tools.’ We develop this skill at around the age of four. It’s then that we become story-ready; equipped to understand the logic of narrative.
The human ability to populate our minds with imagined other minds is the start of religion. Shamans in hunter-gatherer tribes would enter trance states and interact with spirits, and use these interactions as attempts to control the world. Religions were also typically animistic: our storytelling brains would project human-like minds into trees, rocks, mountains and animals, imagining they were possessed by gods who were responsible for changeful events, and required controlling with ritual and sacrifice.
Childhood stories reflect our natural tendency for such hyperactive mind-detecting. In fairytales, human-like minds are everywhere: mirrors talk, pigs eat breakfast, frogs turn into princes. Youngsters naturally treat their dolls and teddies as if they’re inhabited by selves. I remember feeling terrible guilt for preferring my pink bear, handmade by my grandmother, to my shop-bought brown bear. I knew they both knew how I felt, and that left me distracted and sad.
We never really grow out of our inherent animism. Which one of us hasn’t kicked a door that’s slammed on our fingers believing, in that disorientating flash of pain, that it attacked us out of spite? Who among us hasn’t told a self-assembly wardrobe to fuck off? Whose storytelling brain doesn’t commit its own literary-style pathetic fallacy, allowing the sun to make them optimistic about the coming day or the brooding clouds pessimistic? Studies indicate that those who project a human personality onto their cars show less interest in trading them. Bankers project human moods onto the movements of the markets and place their trades accordingly.
When we’re reading, hearing or watching a story we deploy our theory-of-mind skills by automatically making hallucinatory models of the minds of its characters. Some authors model the minds of their own characters with such force that they hear them talk. Charles Dickens, William Blake and Joseph Conrad all spoke of such extraordinary experiences. The novelist and psychologist Professor Charles Fernyhough has led research in which 19 per cent of ordinary readers reported hearing the voices of fictional characters even after they’d put their books down. Some reported a kind of literary possession, with the character influencing the tone and nature of their thoughts.
But much as humans excel at such feats of theory of mind, we also tend to dramatically overestimate our abilities. Although there’s an admitted absurdity in claiming to be able to quantify human behaviour with such absolute numerical precision, some research suggests strangers read another’s thoughts and feelings with an accuracy of just 20 per cent. Friends and lovers? A mere 35 per cent. Our errors about what others are thinking are a major cause of human drama. As we move through life, wrongly predicting what people are thinking and how they’ll react when we try to control them, we haplessly trigger feuds and fights and misunderstandings that fire devastating spirals of unexpected change into our social worlds.
Comedy, whether by William Shakespeare or John Cleese and Connie Booth, is often built on such mistakes. But whatever the mode of storytelling, well-imagined characters always have theories about the minds of other characters and – because this is drama – those theories will often be wrong. This wrongness will lead to unexpected consequences and yet more drama. The influential post-war director Alexander Mackendrick writes, ‘I start by asking: What does A think B is thinking about A? It sounds complicated (and it is) but this is the very essence of giving some density to a character and, in turn, a scene.’
The author Richard Yates uses a theory-of-mind mistake to create a pivotal moment of drama in his classic Revolutionary Road. The novel charts the dissolving marriage of Frank and April Wheeler. When they were young, and newly in love, Frank and April dreamed of bohemian lives in Paris. But, when we meet them, middle-aged reality has struck. Frank and April have two children, with a third on the way, and have moved into a cookie-cutter suburb. Frank’s secured a job at his father’s old company and has found himself rather settling into a life of boozy lunches and housewife-at-home ease. But April isn’t happy. She still dreams of Paris. They argue, bitterly. Sex is withheld. Frank sleeps with a girl at work. And then he makes his theory-of-mind mistake.
In order to break the impasse with his wife, Frank decides to confess his infidelity. His theory of April’s mind appears to be that she’ll be thrown into a state of catharsis that will jolt her back into reality. There’ll be tears to mop up, sure, but those tears will just remind the ol’ gal why she loves him.
This is not what happens. When he confesses, April asks, Why? Not why he slept with the girl, but why is he bothering to tell her? She doesn’t care about his fling. This isn’t what Frank was expecting at all. He wants her to care! ‘I know you do,’ April tells him. ‘And I suppose I would, if I loved you; but you see I don’t. I don’t love you and I never really have and I never really figured it out until this week.’
1.6
As the eye darts about, building up its story world for you to live inside, the brain is choosy about where it tells the eye to look. We’re attracted to change, of course, but also to other salient details. Scientists used to believe attention was drawn simply to objects that stood out, but recent research suggests we’re more likely to attend to that which we find meaningful. Unfortunately, it’s not yet known precisely what ‘meaningful’ means, in this context, but tests that tracked saccades found, for example, that an untidy shelf attracted more attention than a sun-splashed wall. For me, that untidy shelf hints at human change; at a life in detail; at trouble insinuating itself in a place designed for order. It’s no surprise test-brains were drawn to it. It’s story-stuff, whilst the sun is just a shrug.
Storytellers also choose carefully what meaningful details to show and when. In Revolutionary Road, just after Frank makes his changeful theory-of-mind mistake that throws his life in a new and unexpected direction, the author draws our attention to one brilliant detail. It’s an urgent voice on the radio: ‘And listen to this. Now, during the Fall Clearance, you’ll find Robert Hall’s entire stock of men’s walk shorts and sport jeans drastically reduced!’
Both believable and crushing, it serves to intensify our feelings, at exactly the right moment, of the suffocating and dreary housewifey corner that April has found herself backed into. Its timing also implicitly defines and condemns what Frank has become. He used to think he was bohemian – a thinker! – and now he’s just Bargain Shorts Man. This is an advert for him.
The director Steven Spielberg is famous for his use of salient detail to create drama. In Jurassic Park, during a scene that builds to our first sighting of Tyrannosaurus rex, we see two cups of water on a car dashboard, deep rumbles from the ground sending rings over their liquid surface. We cut between the faces of the passengers, each slowly registering change. Then we see the rear-view mirror vibrating with the stomping of the beast. Extra details like this add even more tension by mimicking the way brains process peak moments of stress. When we realise our car is about to crash, say, the brain needs to temporarily increase its ability to control the world. Its processing power surges and we become aware of more features in our environment, which has the effect of making time seem to slow down. In exactly this way, storytellers stretch time, and thereby build suspense, by packing in extra saccadic moments and detail.
1.7
There’s a park bench, in my hometown, that I don’t like to walk past because it’s haunted by a breakup with my first love. I see ghosts on that bench that are invisible to anyone else except, perhaps, her. And I feel them too. Just as human worlds are haunted with minds and faces, they’re haunted with memories. We think of the act of ‘seeing’ as the simple detection of colour, movement and shape. But we see with our pasts.
That hallucinatory neural model of the world we live inside is made up of smaller, individual models – we have neural models of park benches, dinosaurs, ISIS, ice cream, models of everything – and each of those is packed with associations from our own personal histories. We see both the thing itself and all that we associate with it. We feel it too. Everything our attention rests upon triggers a sensation, most of which are minutely subtle and experienced beneath the level of conscious awareness. These feelings flicker and die so rapidly that they precede conscious thought, and thereby influence it. All these feelings reduce to just two impulses: advance and withdraw. As you scan any scene, then, you’re in a storm of feeling; positive and negative sensations from the objects you see fall over you like fine drops of rain. This understanding is the beginning of creating a compelling and original character on the page. A character in fiction, like a character in life, inhabits their own unique hallucinated world in which everything they see and touch comes with its own unique personal meaning.
These worlds of feeling are a result of the way our brains encode the environment. The models we have of everything are stored in the form of neural networks. When our attention rests upon a glass of red wine, say, a large number of neurons in different parts of the brain are simultaneously activated. We don’t have a specific ‘glass of wine’ area that lights up; what we have are responses to ‘liquid’, ‘red’, ‘shiny surface’, ‘transparent surface’, and so on. When enough of these are triggered, the brain understands what’s in front of it and constructs the glass of wine for us to ‘see’.
But these neural activations aren’t limited to mere descriptions of appearance. When we detect the glass of wine, other associations also flash into being: bitter-sweet flavours; vineyards; grapes; French culture; dark marks on white carpets; your road-trip to the Barossa Valley; the last time you got drunk and made a fool of yourself; the first time you got drunk and made a fool of yourself; the breath of the woman who attacked you. These associations have powerful effects on our perception. Research shows that when we drink wine our beliefs about its quality and price change our actual experience of its taste. The way food is described has a similar effect.
It’s just such associative thinking that gives poetry its power. A successful poem plays on our associative networks as a harpist plays on strings. By the meticulous placing of a few simple words, it brushes gently against deeply buried memories, emotions, joys and traumas, which are stored in the form of neural networks that light up as we read. In this way, poets ring out rich chords of meaning that resonate so profoundly we struggle to fully explain why they’re moving us so.
Alice Walker’s ‘Burial’ describes the poet bringing her child to the cemetery in Eatonton, Georgia, in which several generations of her family are interred. She describes her grandmother resting
undisturbed
beneath the Georgia sun,
above her the neatstepping hooves
of cattle
and graves that ‘drop open without warning’ and
cover themselves with wild ivy
blackberries. Bittersweet and sage.
No one knows why. No one asks.
When I read ‘Burial’ for the first time, the lines at the end of this stanza made little logical sense to me, and yet I immediately found them beautiful, memorable and sad:
Forgetful of geographic resolutions as birds
the far-flung young fly South to bury
the old dead.
It’s these same associative processes that allow us to think metaphorically. Analyses of language reveal the extraordinary fact that we use around one metaphor for every ten seconds of speech or written word. If that sounds like too much, it’s because you’re so used to thinking metaphorically – to speaking of ideas that are ‘conceived’ or rain that is ‘driving’ or rage that is ‘burning’ or people who are ‘dicks’. Our models are not only haunted by ourselves, then, but also by properties of other things. In her 1930 essay ‘Street Haunting’ Virginia Woolf employs several subtle metaphors over the course of a single gorgeous sentence:
How beautiful a London street is then, with its islands of lights, and its long groves of darkness, and on the side of it perhaps some tree-sprinkled, grass-grown space where night is folding herself to sleep naturally and, as one passes the iron railing, one hears those little cracklings and stirrings of leaf and twig which seem to suppose the silence of fields all around them, an owl hooting, and far away the rattle of the train in the valley.
Neuroscientists are building a powerful case that metaphor is far more important to human cognition than has ever been imagined. Many argue it’s the fundamental way that brains understand abstract concepts, such as love, joy, society and economy. It’s simply not possible to comprehend these ideas in any useful sense, then, without attaching them to concepts that have physical properties: things that bloom and warm and stretch and shrink.
Metaphor (and its close sibling, the simile) tends to work on the page in one of two ways. Take this example, from Michael Cunningham’s A Home at the End of the World: ‘She washed old plastic bags and hung them on the line to dry, a string of thrifty tame jellyfish floating in the sun.’ This metaphor works principally by opening an information gap. It asks the brain a question: how can a plastic bag be a jellyfish? To find the answer, we imagine the scene. Cunningham has nudged us into more vividly modelling his story.
In Gone with the Wind, Margaret Mitchell uses metaphor to make not a visual point, but a conceptual one: ‘The very mystery of him excited her curiosity like a door that had neither lock nor key.’
In The Big Sleep, metaphor enables Raymond Chandler to pack a tonne of meaning into just seven words: ‘Dead men are heavier than broken hearts.’
Brain scans illustrate the second, more powerful, use of metaphor. When participants in one study read the words ‘he had a rough day’, their neural regions involved in feeling textures became more activated, compared with those who read ‘he had a bad day’. In another, those who read ‘she shouldered the burden’ had neural regions associated with bodily movement activated more than when they read ‘she carried the burden’. This is prose writing that deploys the weapons of poetry. It works because it activates extra neural models that give the language additional meaning and sensation. We feel the heft and strain of the shouldering, we touch the abrasiveness of the day.
Such an effect is exploited by Graham Greene in The Quiet American. Here, a protagonist with a broken leg is receiving unwanted help from his antagonist: ‘I tried to move away from him and take my own weight, but the pain came roaring back like a train in a tunnel.’ This finely judged metaphor is enough to make you wince. You can almost feel the neural networks firing up and borrowing greedily from each other: the tender limb; the snapped bone; the pain in all its velocity and unstoppableness and thunder, roaring up the tunnel of the leg.
In The God of Small Things, Arundhati Roy uses metaphorical language to sensual effect when describing a love scene between the characters Ammu and Valutha: ‘She could feel herself through him. Her skin. The way her body existed only where he touched her. The rest of her was smoke.’
And here the eighteenth-century writer and critic Denis Diderot uses a one-two of perfectly contrasting similes to smack his point home: ‘Libertines are hideous spiders, that often catch pretty butterflies.’
Metaphor and simile can be used to create mood. In Karl Ove Knausgaard’s A Death in the Family, the narrator describes stepping outside for a cigarette break, in the midst of clearing out the house of his recently deceased father. There he sees, ‘plastic bottles lying on their sides on the brick floor dotted with raindrops. The bottlenecks reminded me of muzzles, as if they were small cannons with their barrels pointing in all directions.’ Knausgaard’s choice of language adds to the general deathly, angry aura of the passage by flicking unexpectedly at the reader’s models of guns.
Descriptive masters such as Charles Dickens manage to hit our associative models again and again, creating wonderful crescendos of meaning, with the use of extended metaphors. Here he is, at the peak of his powers, introducing us to Ebenezer Scrooge in A Christmas Carol.
The cold within him froze his old features, nipped his pointed nose, shrivelled his cheek, stiffened his gait; made his eyes red, his thin lips blue; and spoke out shrewdly in his grating voice. A frosty rime was on his head, and on his eyebrows, and his wiry chin. He carried his own low temperature always about with him; he iced his office in the dog-days; and didn’t thaw it one degree at Christmas. External heat and cold had little influence on Scrooge. No warmth could warm, nor wintry weather chill him. No wind that blew was bitterer than he, no falling snow was more intent upon its purpose, no pelting rain less open to entreaty.
The author and journalist George Orwell knew the recipe for a potent metaphor. In the totalitarian milieu of his novel Nineteen Eighty-Four, he describes the small room in which the protagonist Winston and his partner Julia could be themselves without the state spying on them as ‘a world, a pocket of the past where extinct animals could walk.’
It won’t come as much of a surprise to discover the interminably correct Orwell was even right when he wrote about writing. ‘A newly invented metaphor assists thought by evoking a visual image,’ he suggested, in 1946, before warning against the use of that ‘huge dump of worn-out metaphors which have lost all evocative power and are merely used because they save people the trouble of inventing phrases for themselves.’
Researchers recently tested this idea that clichéd metaphors become ‘worn-out’ by overuse. They scanned people reading sentences that included action-based metaphors (‘they grasped the idea’), some of which were well-worn and others fresh. ‘The more familiar the expression, the less it activated the motor system,’ writes the neuroscientist Professor Benjamin Bergen. ‘In other words, over their careers, metaphorical expressions come to be less and less vivid, less vibrant, at least as measured by how much they drive metaphorical simulations.’
1.8
In a classic 1932 experiment, the psychologist Frederic Bartlett read a traditional Native American story to participants and asked them to retell it, from memory, at various intervals. The War of the Ghosts was a brief, 330-word tale about a boy who was reluctantly compelled to join a war party. During the battle, a warrior warned the boy that he had been shot. But, looking down, the boy couldn’t see any wounds on his body. The boy concluded that all the warriors were actually just ghosts. The next morning the boy’s face contorted, something black came out of his mouth, and he dropped down dead.
The War of the Ghosts had various characteristics that were unusual, at least for the study’s English participants. When they recalled the tale over time, Bartlett found their brains did something interesting. They simplified and formalised the story, making it more familiar by altering much of its ‘surprising, jerky and inconsequential’ qualities. They removed bits, added other bits and reordered still more. ‘Whenever anything appeared incomprehensible, it was either omitted or explained,’ in much the same way that an editor might fix a confusing story.
Turning the confusing and random into a comprehensible story is an essential function of the storytelling brain. We’re surrounded by a tumult of often chaotic information. In order to help us feel in control, brains radically simplify the world with narrative. Estimates vary, but it’s believed the brain processes around 11 million bits of information at any given moment, while making us consciously aware of no more than forty. The brain sorts through an abundance of information and decides what salient information to include in its stream of consciousness.
There’s a chance you’ve been made aware of these processes when, in a crowded room, you’ve suddenly heard someone in a distant corner speaking your name. This experience suggests the brain’s been monitoring myriad conversations and has decided to alert you to the one that might prove salient to your wellbeing. It’s constructing your story for you: sifting through the confusion of information that surrounds you, and showing you only what counts. This use of narrative to simplify the complex is also true of memory. Human memory is ‘episodic’ (we tend to experience our messy pasts as highly simplified sequences of causes and effects) and ‘autobiographical’ (those connected episodes are imbued with personal and moral meaning).
There’s no single part of the brain that’s responsible for such story making. While most areas have specialisms, brain activity is far more dispersed than scientists once thought. That said, we wouldn’t be the storytellers we are if it wasn’t for its most recently evolved region, the neocortex. It’s a thin layer, about the depth of a shirt collar, folded in such a way that fully three feet of it is packed into a layer beneath your forehead. One of its critical jobs is keeping track of our social worlds. It helps interpret physical gestures and facial expressions, and supports theory of mind.
But the neocortex is more than just a people-processor. It’s also responsible for complex thought, including planning, reasoning and making lateral connections. When the psychologist Professor Timothy Wilson writes that one of the main differences between us and other animals is that we have a brain that’s expert at constructing ‘elaborate theories and explanations about what is happening in the world and why,’ he’s talking principally about the neocortex.
These theories and explanations often take the form of stories. One of the earliest we know of tells of a bear being chased by three hunters. The bear is hit. It bleeds over the leaves on the forest floor, leaving behind it all the colours of autumn, then manages to escape by climbing up a mountain and leaping into the sky, where it becomes the constellation Ursa Major. Versions of the ‘Cosmic Hunt’ myth have been found in Ancient Greece, northern Europe, Siberia, and in the Americas, where this particular one was told by the Iroquois Indians. Because of this pattern of spread, it’s believed it was being told when there was a land bridge between what’s now Alaska and Russia. That dates it between 13,000 and 28,000 BC.
The Cosmic Hunt myth reads like a classic piece of human bullshit. Perhaps it originated in a dream or shamanistic vision. But, just as likely, it started when someone, at some point, asked someone else, ‘Hey, why do those stars look like a bear?’ And that person gave a sage-like sigh, leaned on a branch and said, ‘Well, it’s funny you should ask …’ And here we are, 20,000 years later, still telling it.
When posed with even the deepest questions about reality, human brains tend towards story. What is a modern religion if not an elaborate neocortical ‘theory and explanation about what’s happening in the world and why’? Religion doesn’t merely seek to explain the origins of life, it’s our answer to the most profound questions of all: What is good? What is evil? What do I do about all my love, guilt, hate, lust, envy, fear, mourning and rage? Does anybody love me? What happens when I die? The answers don’t naturally emerge as data or an equation. Rather, they typically have a beginning, a middle and an end and feature characters with wills, some of them heroic, some villainous, all co-starring in a dramatic, changeful plot built from unexpected events that have meaning.
To understand the basis of how the brain turns the superabundance of information that surrounds it into a simplified story is to understand a critical rule of storytelling. Brain stories have a basic structure of cause and effect. Whether it’s processing memory, religion or The War of the Ghosts, the brain rebuilds the confusion of reality into simplified theories of how one thing causes another. Cause and effect is a fundamental of how we understand the world. The brain can’t help but make cause and effect connections. It’s automatic. We can test it now. BANANAS. VOMIT. Here’s the psychologist Professor Daniel Kahneman describing what just happened in your brain: ‘There was no particular reason to do so, but your mind automatically assumed a temporal sequence and a causal connection between the words bananas and vomit, forming a sketchy scenario in which bananas caused the sickness.’
As Kahneman’s test shows, the brain makes cause and effect connections even where there are none. The power of this cause and effect story-making was explored in the early twentieth century by the Soviet filmmakers Vsevolod Pudovkin and Lev Kuleshov, who juxtaposed film of a famous actor’s expressionless face with stock footage of a bowl of soup, a dead woman in a coffin and a girl playing with a toy bear. They then showed each juxtaposition to an audience. ‘The result was terrific,’ recalled Pudovkin. ‘The public raved about the acting of the artist. They pointed out the heavy pensiveness of his mood over the forgotten soup, were touched and moved by the deep sorrow with which he looked on the dead woman, and admired the light, happy smile with which he surveyed the girl at play. But we knew that in all three cases the face was exactly the same.’
Subsequent experiments confirmed the filmmakers’ findings. When shown cartoons of simple moving shapes, viewers helplessly inferred animism and built cause-and-effect narratives about what was happening: this ball is bullying that one; this triangle is attacking this line, and so on. When presented with discs moving randomly on a screen, viewers imputed chase sequences where there were none.
Cause and effect is the natural language of the brain. It’s how it understands and explains the world. Compelling stories are structured as chains of causes and effects. A secret of bestselling page-turners and blockbusting scripts is their relentless adherence to forward motion, one thing leading directly to another. In 2005, the Pulitzer Prize-winning playwright David Mamet was captaining a TV drama called The Unit. After becoming frustrated with his writers producing scenes with no cause and effect – that were, for instance, simply there to deliver expository information – he sent out an angry ALL CAPS memo, which leaked online (I’ve de-capped what follows to save your ears): ‘Any scene which does not both advance the plot and stand alone (that is, dramatically, by itself, on its own merits) is either superfluous or incorrectly written,’ he wrote. ‘Start, every time, with this inviolable rule: the scene must be dramatic. It must start because the hero has a problem, and it must culminate with the hero finding him or herself either thwarted or educated that another way exists.’
The issue isn’t simply that scenes without cause and effect tend to be boring. Plots that play too loose with cause and effect risk becoming confusing, because they’re not speaking in the brain’s language. This is what the screenwriter of The Devil Wears Prada, Aline Brosh McKenna, suggested when she said, ‘You want all your scenes to have a “because” between them, and not an “and then”.’ Brains struggle with ‘and then’. When one thing happens over here, and then we’re with a woman in a car park who’s just witnessed a stabbing, and then there’s a rat in Mothercare in 1977, and then there’s an old man singing sea shanties in a haunted pear orchard, the writer is asking a lot of people.
But sometimes this is on purpose. An essential difference between commercial and literary storytelling is their use of cause and effect. Change in mass-market story is quick and clear and easily understandable, while in high literature it’s often slow and ambiguous and demands plenty of work from the reader, who has to ponder and decode the connections for themself. Novels such as Marcel Proust’s Swann’s Way are famously meandering and include, for example, a description of hawthorn blossom that lasts for well over a thousand words. (‘You are fond of hawthorns,’ one character remarks to the narrator, halfway through.) The art-house films of David Lynch are frequently referred to as ‘dreamlike’ because, like dreams, there’s often a dearth of logic to their cause and effect.
Those who enjoy such stories are more likely to be expert readers, those lucky enough to have been born with the right kinds of minds, and raised in learning environments that nurtured the skill of picking up the relatively sparse clues to meaning left by such storytellers. I also suspect they tend to be higher than average in the personality trait ‘openness to experience’, which strongly predicts an interest in poetry and the arts (and also ‘contact with psychiatric services’). Expert readers understand that the patterns of change they’ll encounter in art-house films and literary or experimental fiction will be enigmatic and subtle, the causes and effects so ambiguous that they become a wonderful puzzle that stays with them months and even years after reading, ultimately becoming the source of meditation, re-analysis and debate with other readers and viewers – why did characters behave as they did? What was the filmmaker really saying?
But all storytellers, no matter who their intended audience, should beware of over-tightening their narratives. While it’s dangerous to leave readers feeling confused and abandoned, it’s just as risky to over-explain. Causes and effects should be shown rather than told; suggested rather than explained. Readers should be free to anticipate what’s coming next and able to insert their own feelings and interpretations into why that just happened and what it all means. These gaps in explanation are the places in story in which readers insert themselves: their preconceptions; their values; their memories; their connections; their emotions – all become an active part of the story. No writer can ever transplant their neural world perfectly into a reader’s mind. Rather, their two worlds mesh. Only by the reader insinuating themselves into a work can it create a resonance that has the power to shake them as only art can.
1.9
So our mystery is solved. We’ve discovered where a story begins: with a moment of unexpected change, or with the opening of an information gap, or likely both. As it happens to a protagonist, it happens to the reader or viewer. Our powers of attention switch on. We typically follow the consequences of the dramatic change as they ripple out from the start of the story in a pattern of causes and effects whose logic will be just ambiguous enough to keep us curious and engaged. But while this is technically true, it’s actually only the shallowest of answers. There’s obviously more to storytelling than this rather mechanical process.
A similar observation is made by a story-maker near the start of Herman J. Mankiewicz and Orson Welles’s 1941 cinema classic Citizen Kane. The film opens with change and an information gap: the recent death of the mogul Charles Foster Kane, as he drops a glass globe that contains a little snow-covered house and utters a single, mysterious word: rosebud. We’re then presented with a newsreel that documents the raw facts of his seventy years of life: Kane was a well known yet controversial figure who was extraordinarily wealthy and once owned and edited the New York Daily Inquirer. His mother ran a boarding house and the family fortune came after a defaulting tenant left her a gold mine, the Colorado Lode, which had been assumed worthless. Kane was twice married, twice divorced, lost a son and made an unsuccessful attempt at entering politics, before dying a lonely death in his vast, unfinished and decaying palace that, we’re told, was, ‘since the pyramids, the costliest monument a man has built to himself’.
With the newsreel over, we meet its creators – a team of cigarette-smoking newsmen who, it turns out, have just finished their film and are showing it to their boss Rawlston for his editorial comments. And Rawlston is not satisfied. ‘It isn’t enough to tell us what a man did,’ he tells his team. ‘You’ve got to tell us who he was … How is he different from Ford? Or Hearst, for that matter? Or John Doe?’
That newsreel editor was right (as editors are with maddening regularity). We’re a hyper-social species with domesticated brains that have been engineered specifically to control an environment of humans. We’re insatiably inquisitive, beginning with our tens of thousands of childhood questions about how one thing causes another. Being a domesticated species, we’re most interested of all in the cause and effect of other people. We’re endlessly curious about them. What are they thinking? What are they plotting? Who do they love? Who do they hate? What are their secrets? What matters to them? Why does it matter? Are they an ally? Are they a threat? Why did they do that irrational, unpredictable, dangerous, incredible thing? What drove them to build ‘the world’s largest pleasure ground’ on top of a manmade ‘private mountain’ that contained the most populous zoo ‘since Noah’ and a ‘collection of everything so big it can never be catalogued’? Who is the person really? How did they become who they are?
Good stories are explorations of the human condition; thrilling voyages into foreign minds. They’re not so much about events that take place on the surface of the drama as they are about the characters that have to battle them. Those characters, when we meet them on page one, are never perfect. What arouses our curiosity about them, and provides them with a dramatic battle to fight, is not their achievements or their winning smile. It’s their flaws.
CHAPTER TWO:
THE FLAWED SELF
2.0
There’s something you should know about Mr B. He’s being watched by the FBI. They film him constantly and in secret, then cut the footage together and broadcast it to millions as ‘The Mr B Show’. This makes life rather awkward for Mr B. He showers in swimming trunks and dresses beneath bedsheets. He hates talking to others, as he knows they’re actors hired by the FBI to create drama. How can he trust them? He can’t trust anyone. No matter how many people explain why he’s wrong, he just can’t see it. He finds a way to dismiss each argument they present to him. He knows it’s true. He feels it’s true. He sees evidence for it everywhere.
There’s something else you should know about Mr B. He’s psychotic. One healthy part of his brain, writes the neuroscientist Professor Michael Gazzaniga, ‘is trying to make sense out of some abnormalities going on in another’. The malfunctioning part is causing ‘a conscious experience with very different contents than would normally be there, yet those contents are what constitute Mr B’s reality and provide experiences that his cognition must make sense of.’
Because it’s being warped by faulty signals being sent out by the unhealthy section of his brain, the story Mr B is telling about the world, and his place within it, is badly mistaken. It’s so mistaken he’s no longer able to adequately control his environment, so doctors and care staff have to do it on his behalf, in a psychiatric institution.
As unwell as he is, we’re all a bit like Mr B. The controlled hallucination inside the silent, black vault of our skulls that we experience as reality is warped by faulty information. But because this distorted reality is the only reality we know, we just can’t see where it’s gone wrong. When people plead with us that we’re mistaken or cruel and acting irrationally, we feel driven to find a way to dismiss each argument they present to us. We know we’re right. We feel we’re right. We see evidence for it everywhere.
These distortions in our cognition make us flawed. Everyone is flawed in their own interesting and individual ways. Our flaws make us who we are, helping to define our character. But our flaws also impair our ability to control the world. They harm us.
At the start of a story, we’ll often meet a protagonist who is flawed in some closely defined way. The mistakes they’re making about the world will help us empathise with them. We’ll warm to their vulnerability. We’ll become emotionally engaged in their struggle. When the dramatic events of the plot coax them to change we’ll root for them.
The problem is, in fiction and in life, changing who we are is hard. The insights we’ve learned from neuroscience and psychology begin to show us exactly why it’s hard. Our flaws – especially the mistakes we make about the human world and how to live successfully within it – are not simply ideas about this and that which we can identify easily and choose to shrug off. They’re built right into our hallucinated models. Our flaws form part of our perception, our experience of reality. This makes them largely invisible to us.
Correcting our flaws means, first of all, managing the task of actually seeing them. When challenged, we often respond by refusing to accept our flaws exist at all. People accuse us of being ‘in denial’. Of course we are: we literally can’t see them. When we can see them, they all too often appear not as flaws at all, but as virtues. The mythologist Joseph Campbell identified a common plot moment in which protagonists ‘refuse the call’ of the story. This is often why.
Identifying and accepting our flaws, and then changing who we are, means breaking down the very structure of our reality before rebuilding it in a new and improved form. This is not easy. It’s painful and disturbing. We’ll often fight with all we have to resist this kind of profound change. This is why we call those who manage it ‘heroes’.
There are various routes by which characters and selves become unique and uniquely flawed, and a basic understanding of them can be of great value to storytellers. One major route involves those moments of change. The brain constructs its hallucinated model of the world by observing millions of instances of cause and effect then constructing its own theories and assumptions about how one thing caused the other. These micro-narratives of cause and effect – more commonly known as ‘beliefs’ – are the building blocks of our neural realm. The beliefs it’s built from feel personal to us because they help make up the world that we inhabit and our understanding of who we are. Our beliefs feel personal to us because they are us.
But many of them will be wrong. Of course the controlled hallucination we live inside is not as distorted as the one that Mr B lives inside. Nobody, however, is right about everything. Nevertheless, the storytelling brain wants to sell us the illusion that we are. Think about the people closest to you. There won’t be a soul among them with whom you’ve never disagreed. You know she’s slightly wrong about that, and he’s got that wrong, and don’t get her started on that. The further you travel from those you admire, the more wrong people become until the only conclusion you’re left with is that entire tranches of the human population are stupid, evil or insane. Which leaves you, the single living human who’s right about everything – the perfect point of light, clarity and genius who burns with godlike luminescence at the centre of the universe.
Hang on, that can’t be right. You must be wrong about something. So you go on a hunt. You count off your most precious beliefs – the ones that really matter to you – one by one. You’re not wrong about that and you’re not wrong about that and you’re certainly not wrong about that or that or that or that. The insidious thing about your biases, errors and prejudices is that they appear as real to you as Mr B’s delusions appear to him. It feels as if everyone else is ‘biased’ and it’s only you that sees reality as it actually is. Psychologists call this ‘naive realism’. Because reality seems clear and obvious and self-evident to you, those who claim to see it differently must be idiots or lying or morally derelict. The characters we meet at the start of story are, like most of us, living just like this – in a state of profound naivety about how partial and warped their hallucination of reality has become. They’re wrong. They don’t know they’re wrong. But they’re about to find out …
If we’re all a bit like Mr B then Mr B is, in turn, like the protagonist in Andrew Niccol’s screenplay, The Truman Show. It tells of thirty-year-old Truman Burbank, who’s come to believe his whole life is staged and controlled. But, unlike Mr B, he’s right. The Truman Show is not only real, it’s being broadcast, twenty-four hours a day, to millions. At one point, the show’s executive producer is asked why he thinks it’s taken Truman so long to become suspicious of the true nature of his world. ‘We accept the reality of the world with which we’re presented,’ he answers. ‘It’s as simple as that.’