Endure: Mind, Body and the Curiously Elastic Limits of Human Performance
At the opposite end of the spectrum, even the greatest sprinters in the world fight against what John Smith, the coach of former 100-meter world-record holder Maurice Greene, euphemistically calls the “Negative Acceleration Phase.” The race may be over in ten seconds, but most sprinters hit their top speed after 50 to 60 meters, sustain it briefly, then start to fade. Usain Bolt’s ability to stride magisterially away from his competitors at the end of a race? A testament to his endurance: he’s slowing down a little less (or a little later) than everyone else. In Bolt’s 9.58-second world-record race at the 2009 World Championships in Berlin, his last 20 meters was five hundredths of a second slower than the previous 20 meters, but he still extended his lead over the rest of the field.
At the same world championships, Bolt went on to set the 200-meter world record with a time of 19.19 seconds. A crucial detail: he ran the first half of the race in 9.92 seconds—an amazing time, considering the 200 starts on a curve, but still slower than his 100-meter record. It’s barely perceptible, but he was pacing himself, deliberately spreading his energy out to maximize his performance over the whole distance. This is why the psychology and physiology of endurance are inextricably linked: any task lasting longer than a dozen or so seconds requires decisions, whether conscious or unconscious, on how hard to push and when. Even in repeated all-out weightlifting efforts—brief five-second pulls that you’d think would be a pure measure of muscular force—studies have found that we can’t avoid pacing ourselves: your “maximum” force depends on how many reps you think you have left.
This inescapable importance of pacing is why endurance athletes are obsessed with their splits. As John L. Parker Jr. wrote in his cult running classic, Once a Runner, “A runner is a miser, spending the pennies of his energy with great stinginess, constantly wanting to know how much he has spent and how much longer he will be expected to pay. He wants to be broke at precisely the moment he no longer needs his coin.” In my race in Sherbrooke, I knew I needed to run each 200-meter lap in just under 32 seconds in order to break four minutes, and I had spent countless training hours learning the feel of this exact pace. So it was a shock, an eye-widening physical jolt to my system, to hear the timekeeper call out, as I completed my first circuit of the track, “Twenty-seven!”
The science of how we pace ourselves turns out to be surprisingly complex (as we’ll see in later chapters). You judge what’s sustainable based not only on how you feel, but on how that feeling compares to how you expected to feel at that point in the race. As I started my second lap, I had to reconcile two conflicting inputs: the intellectual knowledge that I had set off at a recklessly fast pace, and the subjective sense that I felt surprisingly, exhilaratingly good. I fought off the panicked urge to slow down, and came through the second lap in 57 seconds—and still felt good. Now I knew for sure that something special was happening.
As the race proceeded, I stopped paying attention to the split times. They were so far ahead of the 4:00 schedule I’d memorized that they no longer conveyed any useful information. I simply ran, hoping to reach the finish before the gravitational pull of reality reasserted its grip on my legs. I crossed the line in 3 minutes, 52.7 seconds, a personal best by a full nine seconds. In that one race, I’d improved more than my cumulative improvement since my first season of running, five years earlier. Poring through my training logs—as I did that night, and have many times since—revealed no hint of the breakthrough to come. My workouts suggested, at most, incremental gains compared to previous years.
After the race, I debriefed with a teammate who had timed my lap splits for me. His watch told a very different story of the race. My first lap had taken 30 seconds, not 27; my second lap was 60, not 57. Perhaps the lap counter calling the splits at the finish had started his watch three seconds late; or perhaps his effort to translate on the fly from French to English for my benefit had resulted in a delay of a few seconds. Either way, he’d misled me into believing that I was running faster than I really was, while feeling unaccountably good. As a result, I’d unshackled myself from my pre-race expectations and run a race nobody could have predicted.
After Roger Bannister came the deluge—at least, that’s how the story is often told. Typical of the genre is The Winning Mind Set, a 2006 self-help book by Jim Brault and Kevin Seaman, which uses Bannister’s four-minute mile as a parable about the importance of self-belief. “[W]ithin one year, 37 others did the same thing,” they write. “In the year after that, over 300 runners ran a mile in less than four minutes.” Similar larger-than-life (that is, utterly fictitious) claims are a staple in motivational seminars and across the Web: once Bannister showed the way, others suddenly brushed away their mental barriers and unlocked their true potential.
As interest in the prospects of a sub-two-hour marathon heats up, this narrative crops up frequently as evidence that the new challenge, too, is primarily psychological. Skeptics, meanwhile, assert that belief has nothing to do with it—that humans, in their current form, are simply incapable of running that fast for that long. The debate, like its predecessor six decades ago, offers a compelling real-world test bed for exploring the various theories about endurance and human limits that scientists are currently investigating. But to draw any meaningful conclusions, it’s important to get the facts right. For one thing, Landy was the only other person to join the sub-four club within a year of Bannister’s run, and just four others followed the next year. It wasn’t until 1979, more than twenty years later, that Spanish star José Luis González became the three hundredth man to break the barrier.
And there’s more to Landy’s sudden breakthrough, after being stuck for so many races, than simple mind over muscle. His six near-misses all came at low-key meets in Australia where competition was sparse and weather often unfavorable. He finally embarked on the long voyage to Europe, where tracks were fast and competition plentiful, in the spring of 1954—only to discover, just three days after he arrived, that Bannister had already beaten him to the goal. In Helsinki, he had a pacer for the first time, a local runner who led the first lap and a half at a brisk pace. And more important, he had real competition: Chris Chataway, one of the two men who had paced Bannister’s sub-four run, was nipping at Landy’s heels until partway through the final lap. It’s not hard to believe that Landy would have broken four that day even if Roger Bannister had never existed.
Still, I can’t entirely dismiss the mind’s role—in no small part because of what happened in the wake of my own breakthrough. In my next attempt at the distance after Sherbrooke, I ran 3:49. In the race after that, I crossed the line, as confused as I was exhilarated, in 3:44, qualifying me for that summer’s Olympic Trials. In the space of three races, I’d somehow been transformed. The TV coverage of the 1996 trials is on YouTube, and as the camera lingers on me before the start of the 1,500 final (I’m lined up next to Graham Hood, the Canadian record-holder at the time), you can see that I’m still not quite sure how I got there. My eyes keep darting around in panic, as if I expect to glance down and discover that I’m still in my pajamas.
I spent a lot of time over the next decade chasing further breakthroughs, with decidedly mixed results. Knowing (or believing) that your ultimate limits are all in your head doesn’t make them any less real in the heat of a race. And it doesn’t mean you can simply decide to change them. If anything, my head held me back as often as it pushed me forward during those years, to my frustration and befuddlement. “It should be mathematical,” is how U.S. Olympic runner Ian Dobson described the struggle to understand the ups and downs of his own performances, “but it’s not.” I, too, kept searching for the formula—the one that would allow me to calculate, once and for all, my limits. If I knew that I had run as fast as my body was capable of, I reasoned, I’d be able to walk away from the sport with no regrets.
At twenty-eight, after an ill-timed stress fracture in my sacrum three months before the 2004 Olympic Trials, I finally decided to move on. I returned to school for a journalism degree, and then started out as a general assignment reporter with a newspaper in Ottawa. But I found myself drawn back to the same lingering questions. Why wasn’t it mathematical? What held me back from breaking four for so long, and what changed when I did? I left the newspaper and started writing as a freelancer about endurance sports—not so much about who won and who lost, but about why. I dug into the scientific literature and discovered that there was a vigorous (and sometimes rancorous) ongoing debate about those very questions.
Physiologists spent most of the twentieth century on an epic quest to understand how our bodies fatigue. They cut the hind legs off frogs and jolted the severed muscles with electricity until they stopped twitching; lugged cumbersome lab equipment on expeditions to remote Andean peaks; and pushed thousands of volunteers to exhaustion on treadmills, in heat chambers, and on every drug you can think of. What emerged was a mechanistic—almost mathematical—view of human limits: like a car with a brick on its gas pedal, you go until the tank runs out of gas or the radiator boils over, then you stop.
But that’s not the whole picture. With the rise of sophisticated techniques to measure and manipulate the brain, researchers are finally getting a glimpse of what’s happening in our neurons and synapses when we’re pushed to our limits. It turns out that, whether it’s heat or cold, hunger or thirst, or muscles screaming with the supposed poison of “lactic acid,” what matters in many cases is how the brain interprets these distress signals. With new understanding of the brain’s role come new—and sometimes worrisome—opportunities. At its Santa Monica, California, headquarters, Red Bull has experimented with transcranial direct-current stimulation, applying a jolt of electricity through electrodes to the brains of elite triathletes and cyclists, seeking a competitive edge. The British military has funded studies of computer-based brain training protocols to enhance the endurance of its troops, with startling results. And even subliminal messages can help or hurt your endurance: a picture of a smiling face, flashed in 16-millisecond bursts, boosts cycling performance by 12 percent compared to frowning faces.
Over the past decade, I’ve traveled to labs in Europe, South Africa, Australia, and across North America, and spoken to hundreds of scientists, coaches, and athletes who share my obsession with decoding the mysteries of endurance. I started out with the hunch that the brain would play a bigger role than generally acknowledged. That turned out to be true, but not in the simple it’s-all-in-your-head manner of self-help books. Instead, brain and body are fundamentally intertwined, and to understand what defines your limits under any particular set of circumstances, you have to consider them both together. That’s what the scientists described in the following pages have been doing, and the surprising results of their research suggest to me that, when it comes to pushing our limits, we’re just getting started.
After fifty-six days of hard skiing, Henry Worsley glanced down at the digital display of his GPS and stopped. “That’s it,” he announced with a grin, driving a ski pole into the wind-packed snow. “We’ve made it!” It was early evening on January 9, 2009, one hundred years to the day since British explorer Ernest Shackleton had planted a Union Jack in the name of King Edward VII at this precise location on the Antarctic plateau: 88 degrees and 23 minutes south, 162 degrees east. In 1909, it was the farthest south any human had ever traveled, just 112 miles from the South Pole. Worsley, a gruff veteran of the British Special Air Service who had long idolized Shackleton, cried “small tears of relief and joy” behind his goggles, for the first time since he was ten years old. (“My poor physical state accentuated my vulnerability,” he later explained.) Then he and his companions, Will Gow and Henry Adams, unfurled their tent and fired up the kettle. It was −35 degrees Celsius.
For Shackleton, 88°23' south was a bitter disappointment. Six years earlier, as a member of Robert Falcon Scott’s Discovery expedition, he’d been part of a three-man team that set a farthest-south record of 82°17'. But he had been sent home in disgrace after Scott claimed that his physical weakness had held the others back. Shackleton returned for the 1908–09 expedition eager to vindicate himself by beating his former mentor to the pole, but his own four-man inland push was a struggle from the start. By the time Socks, the team’s fourth and final Manchurian pony, disappeared into a crevasse on the Beardmore glacier six weeks into the march, they were already on reduced rations and increasingly unlikely to reach their goal. Still, Shackleton decided to push onward as far as possible. Finally, on January 9, he acknowledged the inevitable: “We have shot our bolt,” he wrote in his diary. “Homeward bound at last. Whatever regrets may be, we have done our best.”
To Worsley, a century later, that moment epitomized Shackleton’s worth as a leader: “The decision to turn back,” he argued, “must be one of the greatest decisions taken in the whole annals of exploration.” Worsley was a descendant of the skipper of Shackleton’s ship in the Endurance expedition; Gow was Shackleton’s great-nephew by marriage; and Adams was the great-grandson of Shackleton’s second in command on the 1909 trek. The three of them had decided to honor their forebears by retracing the 820-mile route without any outside help. They would then take care of unfinished ancestral business by continuing the last 112 miles to the South Pole, where they would be picked up by a Twin Otter and flown home. Shackleton, in contrast, had to turn around and walk the 820 miles back to his base camp—a return journey that, like most in the great age of exploration, turned into a desperate race against death.
What were the limits that stalked Shackleton? It wasn’t just beard-freezingly cold; he and his men also climbed more than 10,000 feet above sea level, meaning that each icy breath provided only two-thirds as much oxygen as their bodies expected. With the early demise of their ponies, they were man-hauling sleds that had initially weighed as much as 500 pounds, putting continuous strain on their muscles. Studies of modern polar travelers suggest they were burning somewhere between 6,000 and 10,000 calories per day—and doing it on half rations. By the end of their journey, they would have consumed close to a million calories over the course of four relentless months, similar to the totals of the subsequent Scott expedition of 1911–12. South African scientist Tim Noakes argues these two expeditions were “the greatest human performances of sustained physical endurance of all time.”
Shackleton’s understanding of these various factors was limited. He knew that he and his men needed to eat, of course, but beyond that the inner workings of the human body remained shrouded in mystery. That was about to change, though. A few months before Shackleton’s ship, the Nimrod, sailed toward Antarctica from the Isle of Wight in August 1907, researchers at the University of Cambridge published an account of their research on lactic acid, an apparent enemy of muscular endurance that would become intimately familiar to generations of athletes. While the modern view of lactic acid has changed dramatically in the century since then (for starters, what’s found inside the body is actually lactate, a negatively charged ion, rather than lactic acid), the paper marked the beginning of a new era of investigation into human endurance—because if you understand how a machine works, you can calculate its ultimate limits.
The nineteenth-century Swedish chemist Jöns Jacob Berzelius is now best remembered for devising the modern system of chemical notation—H2O and CO2 and so on—but he was also the first, in 1807, to draw the connection between muscle fatigue and a recently discovered substance found in soured milk. Berzelius noticed that the muscles of hunted stags seemed to contain high levels of this “lactic” acid, and that the amount of acid depended on how close to exhaustion the animal had been driven before its death. (To be fair to Berzelius, chemists were still almost a century away from figuring out what “acids” really were. We now know that lactate from muscle and blood, once extracted from the body, combines with protons to produce lactic acid. That’s what Berzelius and his successors measured, which is why they believed that it was lactic acid rather than lactate that played a role in fatigue. For the remainder of the book, we’ll refer to lactate except in historical contexts.)
What the presence of lactic acid in the stags’ muscles signified was unclear, given how little anyone knew about how muscles worked. At the time, Berzelius himself subscribed to the idea of a “vital force” that powered living things and existed outside the realm of ordinary chemistry. But vitalism was gradually being supplanted by “mechanism,” the idea that the human body is basically a machine, albeit a highly complex one, obeying the same basic laws as pendulums and steam engines. A series of nineteenth-century experiments, often crude and sometimes bordering on comical, began to offer hints about what might power this machine. In 1865, for example, a pair of German scientists collected their own urine while hiking up the Faulhorn, an 8,000-foot peak in the Bernese Alps, then measured its nitrogen content to establish that protein alone couldn’t supply all the energy needed for prolonged exertion. As such findings accumulated, they bolstered the once-heretical view that human limits are, in the end, a simple matter of chemistry and math.
These days, athletes can test their lactate levels with a quick pinprick during training sessions (and some companies now claim to be able to measure lactate in real time with sweat-analyzing adhesive patches). But even confirming the presence of lactic acid was a formidable challenge for early investigators; Berzelius, in his 1808 book, Föreläsningar i Djurkemien (“Lectures in Animal Chemistry”), devotes six dense pages to his recipe for chopping fresh meat, squeezing it in a strong linen bag, cooking the extruded liquid, evaporating it, and subjecting it to various chemical reactions until, having precipitated out the dissolved lead and alcohols, you’re left with a “thick brown syrup, and ultimately a lacquer, having all the character of lactic acid.”
Not surprisingly, subsequent attempts to follow this sort of procedure produced a jumble of ambiguous results that left everyone confused. That was still the situation in 1907, when Cambridge physiologists Frederick Hopkins and Walter Fletcher took on the problem. “[I]t is notorious,” they wrote in the introduction to their paper, “that … there is hardly any important fact concerning the lactic acid formation in muscle which, advanced by one observer, has not been contradicted by some other.” Hopkins was a meticulous experimentalist who went on to acclaim as the codiscoverer of vitamins, for which he won a Nobel Prize; Fletcher was an accomplished runner who, as a student in the 1890s, was among the first to complete the 320-meter circuit around the courtyard of Cambridge’s Trinity College while its ancient clock was striking twelve—a challenge famously immortalized in the movie Chariots of Fire (though Fletcher reportedly cut the corners).
Hopkins and Fletcher plunged the muscles they wanted to test into cold alcohol immediately after finishing whatever tests they wished to perform. This crucial advance kept levels of lactic acid more or less constant during the subsequent processing stages, which still involved grinding up the muscle with a mortar and pestle and then measuring its acidity. Using this newly accurate technique, the two men investigated muscle fatigue by experimenting on frog legs hung in long chains of ten to fifteen pairs connected by zinc hooks. By applying electric current at one end of the chain, they could make all the legs contract at once; after two hours of intermittent contractions, the muscles would be totally exhausted and unable to produce even a feeble twitch.
The results were clear: exhausted muscles contained three times as much lactic acid as rested ones, seemingly confirming Berzelius’s suspicion that it was a by-product—or perhaps even a cause—of fatigue. And there was an additional twist: the amount of lactic acid decreased when the fatigued frog muscles were stored in oxygen, but increased when they were deprived of oxygen. At last, a recognizably modern picture of how muscles fatigue was coming into focus—and from this point on, new findings started to pile up rapidly.