The Upside of Irrationality: The Unexpected Benefits of Defying Logic at Work and at Home
Dan Ariely

Behavioral economist and New York Times bestselling author of Predictably Irrational, Dan Ariely returns to offer a much-needed take on the irrational decisions that influence our dating lives, our workplace experiences, and our general behaviour, up close and personal.

In The Upside of Irrationality, behavioral economist Dan Ariely will explore the many ways in which our behaviour often leads us astray in terms of our romantic relationships, our experiences in the workplace, and our temptations to cheat. Blending everyday experience with groundbreaking research, Ariely explains how expectations, emotions, social norms and other invisible, seemingly illogical forces skew our reasoning abilities.

Among the topics Dan explores are:

• What we think will make us happy and what really makes us happy;
• How we learn to love the ones we are with;
• Why online dating doesn’t work, and how we can improve on it;
• Why learning more about people makes us like them less;
• Why large bonuses can make CEOs less productive;
• How to really motivate people at work;
• Why bad directions can help us;
• How we fall in love with our ideas;
• How we are motivated by revenge; and
• What motivates us to cheat.

Drawing on the same experimental methods that made Predictably Irrational such a hit, Dan will emphasize the important role that irrationality plays in our day-to-day decision making—not just in our financial marketplace, but in the most hidden aspects of our lives.

The Upside of Irrationality

The Unexpected Benefits of Defying Logic at Work and at Home

International Bestselling author

Dan Ariely

Copyright

HarperCollinsPublishers

1 London Bridge Street

London SE1 9GF

www.harpercollins.co.uk

First published by HarperCollinsPublishers 2010

© Dan Ariely 2010

Dan Ariely asserts the moral right to be identified as the author of this work

A catalogue record of this book is available from the British Library

All rights reserved under International and Pan-American Copyright Conventions. By payment of the required fees, you have been granted the non-exclusive, non-transferable right to access and read the text of this e-book on screen. No part of this text may be reproduced, transmitted, downloaded, decompiled, reverse engineered, or stored in or introduced into any information storage and retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereinafter invented, without the express written permission of HarperCollins e-books.

Ebook Edition © MAY 2010 ISBN: 9780007354795

Version 2016-11-25

Find out more about HarperCollins and the environment at www.harpercollins.co.uk/green

To my teachers, collaborators, and students, for making research fun and exciting.

And to all the participants who took part in our experiments over the years—you are the engine of this research, and I am deeply grateful for all your help.

Table of Contents

Cover Page

Title Page

Copyright

Dedication

INTRODUCTION Lessons from Procrastination and Medical Side Effects

Part I THE UNEXPECTED WAYS WE DEFY LOGIC AT WORK

CHAPTER 1 Paying More for Less

CHAPTER 2 The Meaning of Labor

CHAPTER 3 The IKEA Effect

CHAPTER 4 The Not-Invented-Here Bias

CHAPTER 5 The Case for Revenge

Part II THE UNEXPECTED WAYS WE DEFY LOGIC AT HOME

CHAPTER 6 On Adaptation

CHAPTER 7 Hot or Not?

CHAPTER 8 When a Market Fails

CHAPTER 9 On Empathy and Emotion

CHAPTER 10 The Long-Term Effects of Short-Term Emotions

CHAPTER 11 Lessons from Our Irrationalities

Thanks

List of Collaborators

Notes

Bibliography and Additional Readings

Index

Also by Dan Ariely

About the Publisher

INTRODUCTION Lessons from Procrastination and Medical Side Effects

I don’t know about you, but I have never met anyone who never procrastinates. Delaying annoying tasks is a nearly universal problem—one that is incredibly hard to curb, no matter how hard we try to exert our willpower and self-control or how many times we resolve to reform.

Allow me to share a personal story about one way I learned to deal with my own tendency to procrastinate. Many years ago I experienced a devastating accident. A large magnesium flare exploded next to me and left 70 percent of my body covered with third-degree burns (an experience I wrote about in Predictably Irrational). As if to add insult to injury, I acquired hepatitis from an infected blood transfusion after three weeks in the hospital. Obviously, there is never a good time to get a virulent liver disease, but the timing of its onset was particularly unfortunate because I was already in such bad shape. The disease increased the risk of complications, delayed my treatment, and caused my body to reject many skin transplants. To make matters worse, the doctors didn’t know what type of liver disease I had. They knew I wasn’t suffering from hepatitis A or B, but they couldn’t identify the strain. After a while the illness subsided, but it still slowed my recovery by flaring up from time to time and wreaking havoc on my system.

Eight years later, when I was in graduate school, a flare-up hit me hard. I checked into the student health center, and after many blood tests the doctor gave me a diagnosis: it was hepatitis C, which had recently been isolated and identified. As lousy as I felt, I greeted this as good news. First, I finally knew what I had; second, a promising new experimental drug called interferon looked as if it might be an effective treatment for hepatitis C. The doctor asked whether I’d consider being part of an experimental study to test the efficacy of interferon. Given the threats of liver fibrosis and cirrhosis and the possibility of early death, it seemed that being part of the study was clearly the preferred path.

The initial protocol called for self-injections of interferon three times a week. The doctors told me that after each injection I would experience flulike symptoms including fever, nausea, headaches, and vomiting—warnings that I soon discovered to be perfectly accurate. But I was determined to kick the disease, so every Monday, Wednesday, and Friday evening over the next year and a half, I carried out the following ritual: Once I got home, I would take a needle from the medicine cabinet, open the refrigerator, load the syringe with the right dosage of interferon, plunge the needle deep into my thigh, and inject the medication. Then I would lie down in a big hammock—the only interesting piece of furniture in my loftlike student apartment—from which I had a perfect view of the television. I kept a bucket within reach to catch the vomit that would inevitably come and a blanket to fend off the shivering. About an hour later the nausea, shivering, and headache would set in, and at some point I would fall asleep. By noon the next day I would have more or less recovered and would return to my classwork and research.

Along with the other patients in the study, I wrestled not only with feeling sick much of the time, but also with the basic problem of procrastination and self-control. Every injection day was miserable. I had to face the prospect of giving myself a shot followed by a sixteen-hour bout of sickness in the hope that the treatment would cure me in the long run. I had to endure what psychologists call a “negative immediate effect” for the sake of a “positive long-term effect.” This is the type of problem we all experience when we fail to do short-term tasks that will be good for us down the road. Despite the prodding of conscience, we often would rather avoid doing something unpleasant now (exercising, working on an annoying project, cleaning out the garage) for the sake of a better future (being healthier, getting a job promotion, earning the gratitude of one’s spouse).

At the end of the eighteen-month trial, the doctors told me that the treatment was successful and that I was the only patient in the protocol who had always taken the interferon as prescribed. Everyone else in the study had skipped the medication numerous times—hardly surprising, given the unpleasantness involved. (Lack of medical compliance is, in fact, a very common problem.)

So how did I get through those months of torture? Did I simply have nerves of steel? Like every person who walks the earth, I have plenty of self-control problems and, every injection day, I deeply wanted to avoid the procedure. But I did have a trick for making the treatment more bearable. For me, the key was movies. I love movies and, if I had the time, I would watch one every day. When the doctors told me what to expect, I decided to motivate myself with movies. Besides, I couldn’t do much else anyway, thanks to the side effects.

Every injection day, I would stop at the video store on the way to school and pick up a few films that I wanted to see. Throughout the day, I would think about how much I would enjoy watching them later. Once I got home, I would give myself the injection. Then I would immediately jump into my hammock, make myself comfortable, and start my mini film fest. That way, I learned to associate the act of the injection with the rewarding experience of watching a wonderful movie. Eventually, the negative side effects kicked in, and I didn’t have such a positive feeling. Still, planning my evenings that way helped me associate the injection more closely with the fun of watching a movie than with the discomfort of the side effects, and thus I was able to continue the treatment. (I was also fortunate, in this instance, that I have a relatively poor memory, which meant that I could watch some of the same movies over and over again.)

THE MORAL OF this story? All of us have important tasks that we would rather avoid, particularly when the weather outside is inviting. We all hate grinding through receipts while doing our taxes, cleaning up the backyard, sticking to a diet, saving for retirement, or, like me, undergoing an unpleasant treatment or therapy. Of course, in a perfectly rational world, procrastination would never be a problem. We would simply compute the values of our long-term objectives, compare them to our short-term enjoyments, and understand that we have more to gain in the long term by suffering a bit in the short term. If we were able to do this, we could keep a firm focus on what really matters to us. We would do our work while keeping in mind the satisfaction we’d feel when we finished our project. We would tighten our belts a notch and enjoy our improved health down the line. We would take our medications on time and hope to hear the doctor say one day, “There isn’t a trace of the disease in your system.”

Sadly, most of us often prefer immediately gratifying short-term experiences over our long-term objectives.

We routinely behave as if sometime in the future, we will have more time, more money, and feel less tired or stressed. “Later” seems like a rosy time to do all the unpleasant things in life, even if putting them off means eventually having to grapple with a much bigger jungle in our yard, a tax penalty, the inability to retire comfortably, or an unsuccessful medical treatment. In the end, we don’t need to look far beyond our own noses to realize how frequently we fail to make short-term sacrifices for the sake of our long-term goals.

WHAT DOES ALL of this have to do with the subject of this book? In a general sense, almost everything.

From a rational perspective, we should make only decisions that are in our best interest (“should” is the operative word here). We should be able to discern among all the options facing us and accurately compute their value—not just in the short term but also in the long term—and choose the option that maximizes our best interests. If we’re faced with a dilemma of any sort, we should be able to see the situation clearly and without prejudice, and we should assess pros and cons as objectively as if we were comparing different types of laptops. If we’re suffering from a disease and there is a promising treatment, we should comply fully with the doctor’s orders. If we are overweight, we should buckle down, walk several miles a day, and live on broiled fish, vegetables, and water. If we smoke, we should stop—no ifs, ands, or buts.

Sure, it would be nice if we were more rational and clearheaded about our “should”s. Unfortunately, we’re not. How else do you explain why millions of gym memberships go unused or why people risk their own and others’ lives to write a text message while they’re driving or why…(put your favorite example here)?

THIS IS WHERE behavioral economics enters the picture. In this field, we don’t assume that people are perfectly sensible, calculating machines. Instead, we observe how people actually behave, and quite often our observations lead us to the conclusion that human beings are irrational.

To be sure, there is a great deal to be learned from rational economics, but some of its assumptions—that people always make the best decisions, that mistakes are less likely when the decisions involve a lot of money, and that the market is self-correcting—can clearly lead to disastrous consequences.

To get a clearer idea of how dangerous it can be to assume perfect rationality, think about driving. Transportation, like the financial markets, is a man-made system, and we don’t need to look very far to see other people making terrible and costly mistakes (due to another aspect of our biased worldview, it takes a bit more effort to see our own errors). Car manufacturers and road designers generally understand that people don’t always exercise good judgment while driving, so they build vehicles and roads with an eye to preserving drivers’ and passengers’ safety. Automobile designers and engineers try to compensate for our limited human ability by installing seat belts, antilock brakes, rearview mirrors, air bags, halogen lights, distance sensors, and more. Similarly, road designers put safety margins along the edge of the highway, some festooned with cuts that make a brrrrrr sound when you drive on them. But despite all these safety precautions, human beings persist in making all kinds of errors while driving (including drinking and texting), suffering accidents, injuries, and even death as a result.

Now think about the implosion of Wall Street in 2008 and its attendant impact on the economy. Given our human foibles, why on earth would we think we don’t need to take any external measures to try to prevent or deal with systematic errors of judgment in the man-made financial markets? Why not create safety measures to help keep someone who is managing billions of dollars, and leveraging this investment, from making incredibly expensive mistakes?

EXACERBATING THE BASIC problem of human error are technological developments that are, in principle, very useful but that can also make it more difficult for us to behave in a way that truly maximizes our interests. Consider the cell phone, for example. It’s a handy gadget that lets you not only call but also text and e-mail your friends. If you text while walking, you might look at your phone instead of the sidewalk and risk running into a pole or another person. This would be embarrassing but hardly fatal. Allowing your attention to drift while walking is not so bad; but add a car to the equation, and you have a recipe for disaster.

Likewise, think about how technological developments in agriculture have contributed to the obesity epidemic. Thousands of years ago, as we burned calories hunting and foraging on the plains and in the jungles, we needed to store every possible ounce of energy. Every time we found food containing fat or sugar, we stopped and consumed as much of it as we could. Moreover, nature gave us a handy internal mechanism: a lag of about twenty minutes between the time when we’d actually consumed enough calories and the time when we felt we had enough to eat. That allowed us to build up a little fat, which came in handy if we later failed to bring down a deer.

Now jump forward a few thousand years. In industrialized countries, we spend most of our waking time sitting in chairs and staring at screens rather than chasing after animals. Instead of planting, tending, and harvesting corn and soy ourselves, we have commercial agriculture do it for us. Food producers turn the corn into sugary, fattening stuff, which we then buy from fast-food restaurants and supermarkets. In this Dunkin’ Donuts world, our love of sugar and fat allows us to quickly consume thousands of calories. And after we have scarfed down a bacon, egg, and cheese breakfast bagel, the twenty-minute lag time between having eaten enough and realizing that we’re stuffed allows us to add even more calories in the form of a sweetened coffee drink and a half-dozen powdered-sugar donut holes.

Essentially, the mechanisms we developed during our early evolutionary years might have made perfect sense in our distant past. But given the mismatch between the speed of technological development and human evolution, the same instincts and abilities that once helped us now often stand in our way. Bad decision-making behaviors that manifested themselves as mere nuisances in earlier centuries can now severely affect our lives in crucial ways.

When the designers of modern technologies don’t understand our fallibility, they design new and improved systems for stock markets, insurance, education, agriculture, or health care that don’t take our limitations into account (I like the term “human-incompatible technologies,” and they are everywhere). As a consequence, we inevitably end up making mistakes and sometimes fail magnificently.

THIS PERSPECTIVE OF human nature may seem a bit depressing on the surface, but it doesn’t have to be. Behavioral economists want to understand human frailty and to find more compassionate, realistic, and effective ways for people to avoid temptation, exert more self-control, and ultimately reach their long-term goals. As a society, it’s extremely beneficial to understand how and when we fail and to design/invent/create new ways to overcome our mistakes. As we gain some understanding about what really drives our behaviors and what steers us astray—from business decisions about bonuses and motivation to the most personal aspects of life such as dating and happiness—we can gain control over our money, relationships, resources, safety, and health, both as individuals and as a society.

This is the real goal of behavioral economics: to try to understand the way we really operate so that we can more readily observe our biases, be more aware of their influences on us, and hopefully make better decisions. Although I can’t imagine that we will ever become perfect decision makers, I do believe that an improved understanding of the multiple irrational forces that influence us could be a useful first step toward making better decisions. And we don’t have to stop there. Inventors, companies, and policy makers can take the additional steps to redesign our working and living environments in ways that are naturally more compatible with what we can and cannot do.

In the end, this is what behavioral economics is about—figuring out the hidden forces that shape our decisions, across many different domains, and finding solutions to common problems that affect our personal, business, and public lives.

AS YOU WILL see in the pages ahead, each chapter in this book is based on experiments I carried out over the years with some terrific colleagues (at the end of the book, I have included short biographies of my wonderful collaborators). In each of these chapters, I’ve tried to shed some light on a few of the biases that plague our decisions across many different domains, from the workplace to personal happiness.

Why, you may ask, do my colleagues and I put so much time, money, and energy into experiments? For social scientists, experiments are like microscopes or strobe lights, magnifying and illuminating the complex, multiple forces that simultaneously exert their influences on us. They help us slow human behavior to a frame-by-frame narration of events, isolate individual forces, and examine them carefully and in more detail. They let us test directly and unambiguously what makes human beings tick and provide a deeper understanding of the features and nuances of our own biases.

There is one other point I want to emphasize: if the lessons learned in any experiment were limited to the constrained environment of that particular study, their value would be limited. Instead, I invite you to think about experiments as an illustration of general principles, providing insight into how we think and how we make decisions in life’s various situations. My hope is that once you understand the way our human nature truly operates, you can decide how to apply that knowledge to your professional and personal life.

In each chapter I have also tried to extrapolate some possible implications for life, business, and public policy—focusing on what we can do to overcome our irrational blind spots. Of course, the implications I have sketched are only partial. To get real value from this book and from social science in general, it is important that you, the reader, spend some time thinking about how the principles of human behavior apply to your life and consider what you might do differently, given your new understanding of human nature. That is where the real adventure lies.

READERS FAMILIAR WITH Predictably Irrational might want to know how this book differs from its predecessor. In Predictably Irrational, we examined a number of biases that lead us—particularly as consumers—into making unwise decisions. The book you hold in your hands is different in three ways.

First—and most obviously—this book differs in its title. Like its predecessor, it’s based on experiments that examine how we make decisions, but its take on irrationality is somewhat different. In most cases, the word “irrationality” has a negative connotation, implying anything from mistakenness to madness. If we were in charge of designing human beings, we would probably work as hard as we could to leave irrationality out of the formula; in Predictably Irrational, I explored the downside of our human biases. But there is a flip side to irrationality, one that is actually quite positive. Sometimes we are fortunate in our irrational abilities because, among other things, they allow us to adapt to new environments, trust other people, enjoy expending effort, and love our kids. These kinds of forces are part and parcel of our wonderful, surprising, innate—albeit irrational—human nature (indeed, people who lack the ability to adapt, trust, or enjoy their work can be very unhappy). These irrational forces help us achieve great things and live well in a social structure. The title The Upside of Irrationality is an attempt to capture the complexity of our irrationalities—the parts that we would rather live without and the parts that we would want to keep if we were the designers of human nature. I believe that it is important to understand both our beneficial and our disadvantageous quirks, because only by doing so can we begin to eliminate the bad and build on the good.

Second, you will notice that this book is divided into two distinct parts. In the first part, we’ll look more closely at our behavior in the world of work, where we spend much of our waking lives. We’ll question our relationships—not just with other people but with our environments and ourselves. What is our relationship with our salaries, our bosses, the things we produce, our ideas, and our feelings when we’ve been wronged? What really motivates us to perform well? What gives us a sense of meaning? Why does the “Not-Invented-Here” bias have such a foothold in the workplace? Why do we react so strongly in the face of injustice and unfairness?

In the second part, we’ll move beyond the world of work to investigate how we behave in our interpersonal relations. What is our relationship to our surroundings and our bodies? How do we relate to the people we meet, those we love, and faraway strangers who need our help? And what is our relationship to our emotions? We’ll examine the ways we adapt to new conditions, environments, and lovers; how the world of online dating works (and doesn’t); what forces dictate our response to human tragedies; and how our reactions to emotions in a given moment can influence patterns of behavior long into the future.

The Upside of Irrationality is also very different from Predictably Irrational because it is highly personal. Though my colleagues and I try to do our best to be as objective as possible in running and analyzing our experiments, much of this book (particularly the second part) draws on some of my difficult experiences as a burn patient. My injury, like all severe injuries, was very traumatic, but it also very quickly shifted my outlook on many aspects of life. My journey provided me with some unique perspectives on human behavior. It presented me with questions that I might not have otherwise considered but, because of my injury, became central to my life and the focus of my research. Far beyond that, and perhaps more important, it led me to study how my own biases work. In describing my personal experiences and biases, I hope to shed some light on the thought process that has led me to my particular interest and viewpoints and illustrate some of the essential ingredients of our common human nature—yours and mine.

AND NOW FOR the journey…

Part I THE UNEXPECTED WAYS WE DEFY LOGIC AT WORK

CHAPTER 1 Paying More for Less

Why Big Bonuses Don’t Always Work

Imagine that you are a plump, happy laboratory rat. One day, a gloved human hand carefully picks you out of the comfy box you call home and places you into a different, less comfy box that contains a maze. Since you are naturally curious, you begin to wander around, whiskers twitching along the way. You quickly notice that some parts of the maze are black and others are white. You follow your nose into a white section. Nothing happens. Then you take a left turn into a black section. As soon as you enter, you feel a very nasty shock surge through your paws.

Every day for a week, you are placed in a different maze. The dangerous and safe places change daily, as do the colors of the walls and the strength of the shocks. Sometimes the sections that deliver a mild shock are colored red. Other times, the parts that deliver a particularly nasty shock are marked by polka dots. Sometimes the safe parts are covered with black-and-white checks. Each day, your job is to learn to navigate the maze by choosing the safest paths and avoiding the shocks (your reward for learning how to safely navigate the maze is that you aren’t shocked). How well do you do?

More than a century ago, psychologists Robert Yerkes and John Dodson performed different versions of this basic experiment in an effort to find out two things about rats: how fast they could learn and, more important, what intensity of electric shocks would motivate them to learn fastest. We could easily assume that as the intensity of the shocks increased, so would the rats’ motivation to learn. When the shocks were very mild, the rats would simply mosey along, unmotivated by the occasional painless jolt. But as the intensity of the shocks and discomfort increased, the scientists thought, the rats would feel as though they were under enemy fire and would therefore be more motivated to learn more quickly. Following this logic we would assume that when the rats really wanted to avoid the most intense shocks, they would learn the fastest.

We are usually quick to assume that there is a link between the magnitude of the incentive and the ability to perform better. It seems reasonable that the more motivated we are to achieve something, the harder we will work to reach our goal, and that this increased effort will ultimately move us closer to our objective. This, after all, is part of the rationale behind paying stockbrokers and CEOs sky-high bonuses: offer people a very large bonus, and they will be motivated to work and perform at very high levels.

SOMETIMES OUR INTUITIONS about the links between motivation and performance (and, more generally, our behavior) are accurate; at other times, reality and intuition just don’t jibe. In Yerkes and Dodson’s case, some of the results aligned with what most of us might expect, while others did not. When the shocks were very weak, the rats were not very motivated, and, as a consequence, they learned slowly. When the shocks were of medium intensity, the rats were more motivated to quickly figure out the rules of the cage, and they learned faster. Up to this point, the results fit with our intuitions about the relationship between motivation and performance.

But here was the catch: when the shock intensity was very high, the rats performed worse! Admittedly, it is difficult to get inside a rat’s mind, but it seemed that when the intensity of the shocks was at its highest, the rats could not focus on anything other than their fear of the shock. Paralyzed by terror, they had trouble remembering which parts of the cage were safe and which were not and, so, were unable to figure out how their environment was structured.

The graph below shows three possible relationships between incentive (payment, shocks) and performance. The light gray line represents a simple relationship, where higher incentives always contribute in the same way to performance. The dashed gray line represents a diminishing-returns relationship between incentives and performance.

The solid dark line represents Yerkes and Dodson’s results. At lower levels of motivation, adding incentives helps to increase performance. But as the level of the base motivation increases, adding incentives can backfire and reduce performance, creating what psychologists often call an “inverse-U relationship.”
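To make the three curves concrete, here is a minimal numerical sketch in Python (my own illustration, not code or data from the book; the specific functional forms are assumptions chosen only to mimic the shapes described above):

    # Illustrative only: simple stand-ins for the three incentive-performance
    # curves described above, not data from Yerkes and Dodson.
    import math

    def linear(incentive):
        # performance rises in direct proportion to the incentive
        return incentive

    def diminishing_returns(incentive):
        # each extra unit of incentive adds less performance than the last
        return math.sqrt(incentive)

    def inverted_u(incentive, optimum=5.0, width=2.0):
        # performance peaks at a moderate incentive, then falls off
        # as "overmotivation" takes over
        return 10 * math.exp(-((incentive - optimum) ** 2) / (2 * width ** 2))

    print(f"{'incentive':>9} {'linear':>8} {'diminishing':>12} {'inverted-U':>11}")
    for incentive in range(0, 11, 2):
        print(f"{incentive:>9} {linear(incentive):>8.2f} "
              f"{diminishing_returns(incentive):>12.2f} {inverted_u(incentive):>11.2f}")

Printing the table shows the first two columns always rising (the second more and more slowly), while the inverted-U column climbs up to the middle incentive level and then falls, which is the pattern Yerkes and Dodson observed when the shocks became very intense.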

Yerkes and Dodson’s experiment should make us wonder about the real relationship between payment, motivation, and performance in the labor market. After all, their experiment clearly showed that incentives can be a double-edged sword. Up to a certain point, they motivate us to learn and perform well. But beyond that point, motivational pressure can be so high that it actually distracts an individual from concentrating on and carrying out a task—an undesirable outcome for anyone.

Of course, electric shocks are not very common incentive mechanisms in the real world, but this kind of relationship between motivation and performance might also apply to other types of motivation: whether the reward is being able to avoid an electrical shock or the financial rewards of making a large amount of money. Let’s imagine how Yerkes and Dodson’s results would look if they had used money instead of shocks (assuming that the rats actually wanted money). At small bonus levels, the rats would not care and not perform very well. At medium bonus levels, the rats would care more and perform better. But, at very high bonus levels, they would be “overmotivated.” They would find it hard to concentrate, and, as a consequence, their performance would be worse than if they were working for a smaller bonus.