MIT’S EAST CAMPUS dormitory is a daunting place. It is home to the hackers, hardware enthusiasts, oddballs, and general misfits (and believe me—it takes a serious misfit to be a misfit at MIT). One hall allows loud music, wild parties, and even public nudity. Another is a magnet for engineering students, whose models of everything from bridges to roller coasters can be found everywhere. (If you ever visit this hall, press the “emergency pizza” button, and a short time later a pizza will be delivered to you.) A third hall is painted completely black. A fourth has bathrooms adorned with murals of various kinds: press the palm tree or the samba dancer, and music, piped in from the hall’s music server (all downloaded legally, of course), comes on.
One afternoon a few years ago, Kim, one of my research assistants, roamed the hallways of East Campus with a laptop tucked under her arm. At each door she asked the students whether they’d like to make some money participating in a quick experiment. When the reply was in the affirmative, Kim entered the room and found (sometimes only with difficulty) an empty spot to place the laptop.
As the program booted up, three doors appeared on the computer screen: one red, the second blue, and the third green. Kim explained that the participants could enter any of the three rooms (red, blue, or green) simply by clicking on the corresponding door. Once they were in a room, each subsequent click would earn them a certain amount of money. If a particular room offered between one cent and 10 cents, for instance, they would make something in that range each time they clicked their mouse in that room. The screen tallied their earnings as they went along.
Getting the most money out of this experiment involved finding the room with the biggest payoff and clicking in it as many times as possible. But this wasn’t trivial. Each time you moved from one room to another, you used up one click (you had a total of 100 clicks). On one hand, switching from one room to another might be a good strategy for finding the biggest payout. On the other hand, running madly from door to door (and room to room) meant that you were burning up clicks which could otherwise have made you money.
Albert, a violin player (and a resident of the Dark Lord Krotus worshippers’ hall), was one of the first participants. He was a competitive type, and determined to make more money than anyone else playing the game. For his first move, he chose the red door and entered the cube-shaped room.
Once inside, he clicked the mouse. It registered 3.5 cents. He clicked again; 4.1 cents; a third click registered one cent. After he sampled a few more of the rewards in this room, his interest shifted to the green door. He clicked the mouse eagerly and went in.
Here he was rewarded with 3.7 cents for his first click; he clicked again and received 5.8 cents; he received 6.5 cents the third time. At the bottom of the screen his earnings began to grow. The green room seemed better than the red room—but what about the blue room? He clicked to go through that last unexplored door. Three clicks fell in the range of four cents. Forget it. He hurried back to the green door (the room paying about five cents a click) and spent the remainder of his 100 clicks there, increasing his payoff. At the end, Albert inquired about his score. Kim smiled as she told him it was one of the best so far.
ALBERT HAD CONFIRMED something that we suspected about human behavior: given a simple setup and a clear goal (in this case, to make money), all of us are quite adept at pursuing the source of our satisfaction. If you were to express this experiment in terms of dating, Albert had essentially sampled one date, tried another, and even had a fling with a third. But after he had tried the rest, he went back to the best—and that’s where he stayed for the remainder of the game.
But to be frank, Albert had it pretty easy. Even while he was running around with other “dates,” the previous ones waited patiently for him to return to their arms. But suppose that the other dates, after a period of neglect, began to turn their backs on him? Suppose that his options began to close down? Would Albert let them go? Or would he try to hang on to all his options for as long as possible? In fact, would he sacrifice some of his guaranteed payoffs for the privilege of keeping these other options alive?
To find out, we changed the game. This time, any door left unvisited for 12 clicks would disappear forever.
SAM, A RESIDENT of the hackers’ hall, was our first participant in the “disappearing” condition. He chose the blue door to begin with; and after entering it, he clicked three times. His earnings began building at the bottom of the screen, but this wasn’t the only activity that caught his eye. With each additional click, the other doors diminished by one-twelfth, signifying that if not attended to, they would vanish. Eight more clicks and they would disappear forever.
Sam wasn’t about to let that happen. Swinging his cursor around, he clicked on the red door, brought it up to its full size, and clicked three times inside the red room. But now he noticed the green door—it was four clicks from disappearing. Once again, he moved his cursor, this time restoring the green door to its full size.
The green door appeared to be delivering the highest payout. So should he stay there? (Remember that each room had a range of payouts. So Sam could not be completely convinced that the green door was actually the best. The blue might have been better, or perhaps the red, or maybe neither.) With a frenzied look in his eye, Sam swung his cursor across the screen. He clicked the red door and watched the blue door continue to shrink. After a few clicks in the red, he jumped over to the blue. But by now the green was beginning to get dangerously small—and so he was back there next.
Before long, Sam was racing from one option to another, his body leaning tensely into the game. In my mind I pictured a typically harried parent, rushing kids from one activity to the next.
Is this an efficient way to live our lives—especially when another door or two is added every week? I can’t tell you the answer for certain in terms of your personal life, but in our experiments we saw clearly that running from pillar to post was not only stressful but uneconomical. In fact, in their frenzy to keep doors from shutting, our participants ended up making substantially less money (about 15 percent less) than the participants who didn’t have to deal with closing doors. The truth is that they could have made more money by picking a room—any room—and merely staying there for the whole experiment! (Think about that in terms of your life or career.)
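The arithmetic behind that claim can be sketched in a few lines of code. This is only a toy simulation under assumed payoff ranges and an assumed hopping strategy (the experiment's actual parameters aren't reproduced here); it illustrates why burning clicks to keep every door alive earns less than committing to a single room.

```python
import random

# Illustrative payoff ranges (cents per click) -- assumed, not the
# experiment's actual parameters.
ROOMS = {"red": (1, 10), "green": (3, 8), "blue": (2, 6)}
TOTAL_CLICKS = 100

def stay_put(room, rng):
    """Commit to one room and spend all 100 clicks there."""
    lo, hi = ROOMS[room]
    return sum(rng.uniform(lo, hi) for _ in range(TOTAL_CLICKS))

def door_hopper(rng):
    """Cycle through all three rooms to keep every door alive.
    Each switch burns a click that earns nothing."""
    earnings, clicks_left = 0.0, TOTAL_CLICKS
    rooms = list(ROOMS)
    i = 0
    while clicks_left > 0:
        room = rooms[i % 3]
        clicks_left -= 1  # the switch itself costs one click
        # click a few times before rushing off to refresh the next door
        for _ in range(min(3, clicks_left)):
            lo, hi = ROOMS[room]
            earnings += rng.uniform(lo, hi)
            clicks_left -= 1
        i += 1
    return earnings

rng = random.Random(0)
trials = 2000
stay = sum(stay_put("green", rng) for _ in range(trials)) / trials
hop = sum(door_hopper(rng) for _ in range(trials)) / trials
print(f"stay in green room: {stay:.0f} cents on average")
print(f"door hopping:       {hop:.0f} cents on average")
```

Under these assumptions the hopper spends a quarter of the clicks merely switching, and spreads the rest across rooms with lower average payouts, so the single-room strategy comes out well ahead, consistent with the chapter's point.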
When Jiwoong and I tilted the experiments against keeping options open, the results were still the same. For instance, we made each click opening a door cost three cents, so that the cost was not just the loss of a click (an opportunity cost) but also a direct financial loss. There was no difference in response from our participants. They still had the same irrational excitement about keeping their options open.
Then we told the participants the exact monetary outcomes they could expect from each room. The results were still the same. They still could not stand to see a door close. Also, we allowed some participants to experience hundreds of practice trials before the actual experiment. Certainly, we thought, they would see the wisdom of not pursuing the closing doors. But we were wrong. Once they saw their options shrinking, our MIT students—supposedly among the best and brightest of young people—could not stay focused. Pecking like barnyard hens at every door, they sought to make more money, and ended up making far less.
In the end, we tried another sort of experiment, one that smacked of reincarnation. In this condition, a door would still disappear if it was not visited within 12 clicks. But it wasn’t gone forever. Rather, a single click could bring it back to life. In other words, you could neglect a door without any loss. Would this keep our participants from clicking on it anyhow? No. To our surprise, they continued to waste their clicks on the “reincarnating” door, even though its disappearance had no real consequences and could always be easily reversed. They just couldn’t tolerate the idea of the loss, and so they did whatever was necessary to prevent their doors from closing.
HOW CAN WE unshackle ourselves from this irrational impulse to chase worthless options? In 1941 the philosopher Erich Fromm wrote a book called Escape from Freedom. In a modern democracy, he said, people are beset not by a lack of opportunity, but by a dizzying abundance of it. In our modern society this is emphatically so. We are continually reminded that we can do anything and be anything we want to be. The problem is in living up to this dream. We must develop ourselves in every way possible; must taste every aspect of life; must make sure that of the 1,000 things to see before dying, we have not stopped at number 999. But then comes a problem—are we spreading ourselves too thin? The temptation Fromm was describing, I believe, is what we saw as we watched our participants racing from one door to another.
Running from door to door is a strange enough human activity. But even stranger is our compulsion to chase after doors of little worth—opportunities that are nearly dead, or that hold little interest for us. My student Dana, for instance, had already concluded that one of her suitors was most likely a lost cause. Then why did she jeopardize her relationship with the other man by continuing to nourish the wilting relationship with the less appealing romantic partner? Similarly, how many times have we bought something on sale not because we really needed it but because by the end of the sale all of those items would be gone, and we could never have it at that price again?
THE OTHER SIDE of this tragedy develops when we fail to realize that some things really are disappearing doors, and need our immediate attention. We may work more hours at our jobs, for instance, without realizing that the childhood of our sons and daughters is slipping away. Sometimes these doors close too slowly for us to see them vanishing. One of my friends told me, for instance, that the single best year of his marriage was when he was living in New York, his wife was living in Boston, and they met only on weekends. Before they had this arrangement—when they lived together in Boston—they would spend their weekends catching up on work rather than enjoying each other. But once the arrangement changed, and they knew that they had only the weekends together, their shared time became limited and had a clear end (the time of the return train). Since it was clear that the clock was ticking, they dedicated the weekends to enjoying each other rather than doing their work.
I’m not advocating giving up work and staying home for the sake of spending all your time with your children, or moving to a different city just to improve your weekends with your spouse (although it might provide some benefits). But wouldn’t it be nice to have a built-in alarm, to warn us when the doors are closing on our most important options?
SO WHAT CAN we do? In our experiments, we proved that running helter-skelter to keep doors from closing is a fool’s game. It will not only wear out our emotions but also wear out our wallets. What we need is to consciously start closing some of our doors. Small doors, of course, are rather easy to close. We can easily strike names off our holiday card lists or omit the tae kwon do from our daughter’s string of activities.
But the bigger doors (or those that seem bigger) are harder to close. Doors that just might lead to a new career or to a better job might be hard to close. Doors that are tied to our dreams are also hard to close. So are relationships with certain people—even if they seem to be going nowhere.
We have an irrational compulsion to keep doors open. It’s just the way we’re wired. But that doesn’t mean we shouldn’t try to close them. Think about a fictional episode: Rhett Butler leaving Scarlett O’Hara in Gone with the Wind, in the scene when Scarlett clings to him and begs him, “Where shall I go? What shall I do?” Rhett, after enduring too much from Scarlett, and finally having his fill of it, says, “Frankly, my dear, I don’t give a damn.” It’s not by chance that this line has been voted the most memorable in cinematographic history. It’s the emphatic closing of a door that gives it widespread appeal. And it should be a reminder to all of us that we have doors—little and big ones—which we ought to shut.
We need to drop out of committees that are a waste of our time and stop sending holiday cards to people who have moved on to other lives and friends. We need to determine whether we really have time to watch basketball and play both golf and squash and keep our family together; perhaps we should put some of these sports behind us. We ought to shut them because they draw energy and commitment away from the doors that should be left open—and because they drive us crazy.
SUPPOSE YOU’VE CLOSED so many of your doors that you have just two left. I wish I could say that your choices are easier now, but often they are not. In fact, choosing between two things that are similarly attractive is one of the most difficult decisions we can make. This is a situation not just of keeping options open for too long, but of being indecisive to the point of paying for our indecision in the end. Let me use the following story to explain.
A hungry donkey approaches a barn one day looking for hay and discovers two haystacks of identical size at the two opposite sides of the barn. The donkey stands in the middle of the barn between the two haystacks, not knowing which to select. Hours go by, but he still can’t make up his mind. Unable to decide, the donkey eventually dies of starvation.
This story is hypothetical, of course, and casts unfair aspersions on the intelligence of donkeys. A better example might be the U.S. Congress. Congress frequently gridlocks itself, not necessarily with regard to the big picture of particular legislation—the restoration of the nation’s aging highways, immigration, improving federal protection of endangered species, etc.—but with regard to the details. Often, to a reasonable person, the party lines on these issues are the equivalent of the two bales of hay. Despite this, or because of it, Congress is frequently left stuck in the middle. Wouldn’t a quick decision have been better for everybody?
Here’s another example. One of my friends spent three months selecting a digital camera from two nearly identical models. When he finally made his selection, I asked him how many photo opportunities he had missed, how much of his valuable time he had spent making the selection, and how much he would have paid to have digital pictures of his family and friends documenting the last three months. More than the price of the camera, he said. Has something like this ever happened to you?
What my friend (and also the donkey and Congress) failed to do when focusing on the similarities and minor differences between two things was to take into account the consequences of not deciding. The donkey failed to consider starving, Congress failed to consider the lives lost while it debated highway legislation, and my friend failed to consider all the great pictures he was missing, not to mention the time he was spending at Best Buy. More important, they all failed to take into consideration the relatively minor differences that would have come with either one of the decisions.
My friend would have been equally happy with either camera; the donkey could have eaten either bale of hay; and the members of Congress could have gone home crowing over their accomplishments, regardless of the slight difference in bills. In other words, they all should have considered the decision an easy one. They could have even flipped a coin (figuratively, in the case of the donkey) and gotten on with their lives. But we don’t act that way, because we just can’t close those doors.
ALTHOUGH CHOOSING BETWEEN two very similar options should be simple, in fact it is not. I fell victim to this very same problem a few years ago, when I was considering whether to stay at MIT or move to Stanford (I chose MIT in the end). Confronted with these two options, I spent several weeks comparing the two schools closely and found that they were about the same in their overall attractiveness to me. So what did I do? At this stage of my problem, I decided I needed some more information and research on the ground. So I carefully examined both schools. I met people at each place and asked them how they liked it. I checked out neighborhoods and possible schools for our kids. Sumi and I pondered how the two options would fit in with the kind of life we wanted for ourselves. Before long, I was getting so engrossed in this decision that my academic research and productivity began to suffer. Ironically, as I searched for the best place to do my work, my research was being neglected.
Since you have probably invested some money to purchase my wisdom in this book (not to mention time, and the other activities you have given up in the process), I should probably not readily admit that I wound up like the donkey, trying to discriminate between two very similar bales of hay. But I did.
In the end, and with all my foreknowledge of the difficulty in this decision-making process, I was just as predictably irrational as everyone else.
Chapter 10
The Effect of Expectations
Why the Mind Gets What It Expects
Suppose you’re a fan of the Philadelphia Eagles and you’re watching a football game with a friend who, sadly, grew up in New York City and is a rabid fan of the Giants. You don’t really understand why you ever became friends, but after spending a semester in the same dorm room you start liking him, even though you think he’s football-challenged.
The Eagles have possession and are down by five points with no time-outs left. It’s the fourth quarter, and six seconds are left on the clock. The ball is on the 12-yard line. Four wide receivers line up for the final play. The center hikes the ball to the quarterback who drops back in the pocket. As the receivers sprint toward the end zone, the quarterback throws a high pass just as the time runs out. An Eagles wide receiver near the corner of the end zone dives for the ball and makes a spectacular catch.
The referee signals a touchdown and all the Eagles players run onto the field in celebration. But wait. Did the receiver get both of his feet in? It looks close on the Jumbotron; so the booth calls down for a review. You turn to your friend: “Look at that! What a great catch! He was totally in. Why are they even reviewing it?” Your friend scowls. “That was completely out! I can’t believe the ref didn’t see it! You must be crazy to think that was in!”
What just happened? Was your friend the Giants fan just experiencing wishful thinking? Was he deceiving himself? Worse, was he lying? Or had his loyalty to his team—and his anticipation of its win—completely, truly, and deeply clouded his judgment?
I was thinking about that one evening, as I strolled through Cambridge and over to MIT’s Walker Memorial Building. How could two friends—two honest guys—see one soaring pass in two different ways? In fact, how could any two parties look at precisely the same event and interpret it as supporting their opposing points of view? How could Democrats and Republicans look at a single schoolchild who is unable to read, and take such bitterly different positions on the same issue? How could a couple embroiled in a fight see the causes of their argument so differently?
A friend of mine who had spent time in Belfast, Northern Ireland, as a foreign correspondent, once described a meeting he had arranged with members of the IRA. During the interview, news came that the governor of the Maze prison, a winding row of cell blocks that held many IRA operatives, had been assassinated. The IRA members standing around my friend, quite understandably, received the news with satisfaction—as a victory for their cause. The British, of course, didn’t see it in those terms at all. The headlines in London the next day boiled with anger and calls for retribution. In fact, the British saw the event as proof that discussions with the IRA would lead nowhere and that the IRA should be crushed. I am an Israeli, and no stranger to such cycles of violence. Violence is not rare. It happens so frequently that we rarely stop to ask ourselves why. Why does it happen? Is it an outcome of history, or race, or politics—or is there something fundamentally irrational in us that encourages conflict, that causes us to look at the same event and, depending on our point of view, see it in totally different terms?
Leonard Lee (a professor at Columbia), Shane Frederick (a professor at MIT), and I didn’t have any answers to these profound questions. But in a search for the root of this human condition, we decided to set up a series of simple experiments to explore how previously held impressions can cloud our point of view. We came up with a simple test—one in which we would not use religion, politics, or even sports as the indicator. We would use glasses of beer.
YOU REACH THE entrance to Walker by climbing a set of broad steps between towering Greek columns. Once inside (and after turning right), you enter two rooms with carpeting that predates the advent of electric light, furniture to match, and a smell that has the unmistakable promise of alcohol, packs of peanuts, and good company. Welcome to the Muddy Charles—one of MIT’s two pubs, and the location for a set of studies that Leonard, Shane, and I would be conducting over the following weeks. The purpose of our experiments would be to determine whether people’s expectations influence their views of subsequent events—more specifically, whether bar patrons’ expectations for a certain kind of beer would shape their perception of its taste.
Let me explain this further. One of the beers that would be served to the patrons of the Muddy Charles would be Budweiser. The second would be what we fondly called MIT Brew. What’s MIT Brew? Basically Budweiser, plus a “secret ingredient”—two drops of balsamic vinegar for each ounce of beer. (Some of the MIT students objected to our calling Budweiser “beer,” so in subsequent studies, we used Sam Adams—a substance more readily acknowledged by Bostonians as “beer.”)
At about seven that evening, Jeffrey, a second-year PhD student in computer science, was lucky enough to drop by the Muddy Charles. “Can I offer you two small, free samples of beer?” asked Leonard, approaching him. Without much hesitation, Jeffrey agreed, and Leonard led him over to a table that held two pitchers of the foamy stuff, one labeled A and the other B. Jeffrey sampled a mouthful of one of them, swishing it around thoughtfully, and then sampled the other. “Which one would you like a large glass of?” asked Leonard. Jeffrey thought it over. With a free glass in the offing, he wanted to be sure he would be spending his near future with the right malty friend.
Jeffrey chose beer B as the clear winner, and joined his friends (who were in deep conversation over the cannon that a group of MIT students had recently “borrowed” from the Caltech campus). Unbeknownst to Jeffrey, the two beers he had previewed were Budweiser and the MIT Brew—and the one he selected was the vinegar-laced MIT Brew.
A few minutes later, Mina, a visiting student from Estonia, dropped in. “Like a free beer?” asked Leonard. Her reply was a smile and a nod of the head. This time, Leonard offered more information. Beer A, he explained, was a standard commercial beer, whereas beer B had been doctored with a few drops of balsamic vinegar. Mina tasted the two beers. After finishing the samples (and wrinkling her nose at the vinegar-laced brew B) she gave the nod to beer A. Leonard poured her a large glass of the commercial brew and Mina happily joined her friends at the pub.
Mina and Jeffrey were only two of hundreds of students who participated in this experiment. But their reaction was typical: without foreknowledge about the vinegar, most of them chose the vinegary MIT Brew. But when they knew in advance that the MIT Brew had been laced with balsamic vinegar, their reaction was completely different. At the first taste of the adulterated suds, they wrinkled their noses and requested the standard beer instead. The moral, as you might expect, is that if you tell people up front that something might be distasteful, the odds are good that they will end up agreeing with you—not because their experience tells them so but because of their expectations.
If, at this point in the book, you are considering the establishment of a new brewing company, especially one that specializes in adding some balsamic vinegar to beer, consider the following points: (1) If people read the label, or knew about the ingredient, they would most likely hate your beer. (2) Balsamic vinegar is actually pretty expensive—so even if it makes beer taste better, it may not be worth the investment. Just brew a better beer instead.
BEER WAS JUST the start of our experiments. The MBA students at MIT’s Sloan School also drink a lot of coffee. So one week, Elie Ofek (a professor at the Harvard Business School), Marco Bertini (a professor at the London Business School), and I opened an impromptu coffee shop, at which we offered students a free cup of coffee if they would answer a few questions about our brew. A line quickly formed. We handed our participants their cups of coffee and then pointed them to a table set with coffee additives—milk, cream, half-and-half, white sugar, and brown sugar. We also set out some unusual condiments—cloves, nutmeg, orange peel, anise, sweet paprika, and cardamom—for our coffee drinkers to add to their cups as they pleased.
After adding what they wanted (and none of our odd condiments were ever used) and tasting the coffee, the participants filled out a survey form. They indicated how much they liked the coffee, whether they would like it served in the cafeteria in the future, and the maximum price they would be willing to pay for this particular brew.
We kept handing out coffee for the next few days, but from time to time we changed the containers in which the odd condiments were displayed. Sometimes we placed them in beautiful glass-and-metal containers, set on a brushed metal tray with small silver spoons and nicely printed labels. At other times we placed the same odd condiments in white Styrofoam cups. The labels were handwritten in a red felt-tip pen. We went further and not only cut the Styrofoam cups shorter, but gave them jagged, hand-cut edges.
What were the results? No, the fancy containers didn’t persuade any of the coffee drinkers to add the odd condiments (I guess we won’t be seeing sweet paprika in coffee anytime soon). But the interesting thing was that when the odd condiments were offered in the fancy containers, the coffee drinkers were much more likely to tell us that they liked the coffee a lot, that they would be willing to pay well for it, and that they would recommend that we should start serving this new blend in the cafeteria. When the coffee ambience looked upscale, in other words, the coffee tasted upscale as well.
WHEN WE BELIEVE beforehand that something will be good, therefore, it generally will be good—and when we think it will be bad, it will be bad. But how deep are these influences? Do they just change our beliefs, or do they also change the physiology of the experience itself? In other words, can previous knowledge actually modify the neural activity underlying the taste itself, so that when we expect something to taste good (or bad), it will actually taste that way?
To test this possibility, Leonard, Shane, and I conducted the beer experiments again, but with an important twist. We had already tested our MIT Brew in two ways—by telling our participants about the presence of vinegar in the beer before they tasted the brew, and by not telling them anything at all about it. But suppose we initially didn’t tell them about the vinegar, then had them taste the beer, then revealed the presence of the vinegar, and then asked for their reactions. Would the placement of the knowledge—coming just after the experience—evoke a different response from what we received when the participants got the knowledge before the experience?
For a moment, let’s switch from beer to another example. Suppose you heard that a particular sports car was fantastically exciting to drive, took one for a test drive, and then gave your impressions of the car. Would your impressions be different from those of people who didn’t know anything about the sports car, took the test drive, then heard the car was hot, and then wrote down their impressions? In other words, does it matter if knowledge comes before or after the experience? And if so, which type of input is more important—knowledge before the experience, or an input of information after an experience has taken place?
The significance of this question is that if knowledge merely informs us of a state of affairs, then it shouldn’t matter whether our participants received the information before or after tasting the beer: in other words, if we told them up front that there was vinegar in the beer, this should affect their review of the beer. And if we told them afterward, that should similarly affect their review. After all, they both got the same bad news about the vinegar-laced beer. This is what we should expect if knowledge merely informs us.
On the other hand, if telling our participants about the vinegar at the outset actually reshapes their sensory perceptions to align with this knowledge, then the participants who know about the vinegar up front should have a markedly different opinion of the beer from those who swigged a glass of it and were told only afterward. Think of it this way. If knowledge actually modifies the taste, then the participants who consumed the beer before they got the news about the vinegar tasted the beer in the same way as those in the “blind” condition (who knew nothing about the vinegar). They learned about the vinegar only after their taste was established, at which point, if expectations change our experience, it was too late for the knowledge to affect their sensory perceptions.
So, did the students who were told about the vinegar after tasting the beer like it as little as the students who learned about the vinegar before tasting the beer? Or did they like it as much as the students who never learned about the vinegar? What do you think?
As it turned out, the students who found out about the vinegar after drinking the beer liked the beer much better than those who were told about the vinegar up front. In fact, those who were told afterward about the vinegar liked the beer just as much as those who weren’t aware that there was any vinegar in the beer at all.
What does this suggest? Let me give you another example. Suppose Aunt Darcy is having a garage sale, trying to get rid of many things she collected during her long life. A car pulls up, some people get out, and before long they are gathered around one of the oil paintings propped up against the wall. Yes, you agree with them, it does look like a fine example of early American primitivism. But do you tell them that Aunt Darcy copied it from a photograph just a few years earlier?
My inclination, since I am an honest, upright person, would be to tell them. But should you tell them before or after they finish admiring the painting? According to our beer studies, you and Aunt Darcy would be better off keeping the information under wraps until after the examination. I’m not saying that this would entice the visitors to pay thousands of dollars for the painting (even though our beer drinkers preferred our vinegar-laced beer as much when they were told after drinking it as when they were not told at all), but it might get you a higher price for Aunt Darcy’s work.
By the way, we also tried a more extreme version of this experiment. We told one of two groups in advance about the vinegar (the “before” condition) and told the second group about the vinegar after they had finished the sampling (the “after” condition). Once the tasting was done, rather than offer them a large glass of their choice, we instead gave them a large cup of unadulterated beer, some vinegar, a dropper, and the recipe for the MIT Brew (two drops of balsamic vinegar per ounce of beer). We wanted to see if people would freely add balsamic vinegar to their beer; if so, how much they would use; and how these outcomes would depend on whether the participants tasted the beer before or after knowing about the vinegar.
What happened? Telling the participants about the vinegar after rather than before they tasted the beer doubled the number of participants who decided to add vinegar to their beer. For the participants in the "after" condition, the beer with vinegar didn't taste too bad the first time around (they apparently reasoned), and so they didn't mind giving it another try.*
AS YOU SEE, expectations can influence nearly every aspect of our life. Imagine that you need to hire a caterer for your daughter's wedding. Josephine's Catering boasts about its "delicious Asian-style ginger chicken" and its "flavorful Greek salad with kalamata olives and feta cheese." Another caterer, Culinary Sensations, offers a "succulent organic breast of chicken roasted to perfection and drizzled with a merlot demi-glace, resting in a bed of herbed Israeli couscous" and a "mélange of the freshest roma cherry tomatoes and crisp field greens, paired with a warm circle of chèvre in a fruity raspberry vinaigrette."
Although there is no way to know whether Culinary Sensations’ food is any better than Josephine’s, the sheer depth of the description may lead us to expect greater things from the simple tomato and goat cheese salad. This, accordingly, increases the chance that we (and our guests, if we give them the description of the dish) will rave over it.
This principle, so useful to caterers, is available to everyone. We can add small things that sound exotic and fashionable to our cooking (chipotle-mango sauces seem to be all the rage right now; or try buffalo instead of beef). These ingredients might not make the dish any better in a blind taste test; but by changing our expectations, they can effectively influence the taste when we have this pre-knowledge.
These techniques are especially useful when you are inviting people for dinner—or persuading children to try new dishes. By the same token, it might help the taste of the meal if you omit the fact that a certain cake is made from a commercial mix or that you used generic rather than brand-name orange juice in a cocktail, or, especially for children, that Jell-O comes from cow hooves. I am not endorsing the morality of such actions, just pointing to the expected outcomes.
Finally, don’t underestimate the power of presentation. There’s a reason that learning to present food artfully on the plate is as important in culinary school as learning to grill and fry. Even when you buy take-out, try removing the Styrofoam packaging and placing the food on some nice dishes and garnishing it (especially if you have company); this can make all the difference.
One more piece of advice: If you want to enhance the experience of your guests, invest in a nice set of wineglasses.
Moreover, if you're really serious about your wine, you may want to go all out and purchase the glasses that are specific to burgundies, chardonnays, champagne, etc. Each type of glass is supposed to provide the appropriate environment, which should bring out the best in these wines (even though controlled studies find that the shape of the glass makes no difference at all in an objective blind taste test, that doesn't stop people from perceiving a significant difference when they are handed the "correct" glass). And if you forget that the shape of the glass really has no effect on the taste of the wine, you yourself may enjoy the wine more when you drink it from the appropriately shaped fancy glasses.
Expectations, of course, are not limited to food. When you invite people to a movie, you can increase their enjoyment by mentioning that it got great reviews. This is also essential for building the reputation of a brand or product. That’s what marketing is all about—providing information that will heighten someone’s anticipated and real pleasure. But do expectations created by marketing really change our enjoyment?
I’m sure you remember the famous “Pepsi Challenge” ads on television (or at least you may have heard of them). The ads consisted of people chosen at random, tasting Coke and Pepsi and remarking about which they liked better. These ads, created by Pepsi, announced that people preferred Pepsi to Coke. At the same time, the ads for Coke proclaimed that people preferred Coke to Pepsi. How could that be? Were the two companies fudging their statistics?
The answer is in the different ways the two companies evaluated their products. Coke’s market research was said to be based on consumers’ preferences when they could see what they were drinking, including the famous red trademark, while Pepsi ran its challenge using blind tasting and standard plastic cups marked M and Q. Could it be that Pepsi tasted better in a blind taste test but that Coke tasted better in a non-blind (sighted) test?
To better understand the puzzle of Coke versus Pepsi, a terrific group of neuroscientists—Sam McClure, Jian Li, Damon Tomlin, Kim Cypert, Latané Montague, and Read Montague—conducted their own blind and non-blind taste test of Coke and Pepsi. The modern twist on this test was supplied by a functional magnetic resonance imaging (fMRI) machine. With this machine, the researchers could monitor the activity of the participants’ brains while they consumed the drinks.
Tasting drinks while one is in an fMRI is not simple, by the way, because a person whose brain is being scanned must lie perfectly still. To overcome this problem, Sam and his colleagues put a long plastic tube into the mouth of each participant, and from a distance injected the appropriate drink (Pepsi or Coke) through the tube into their mouths. As the participants received a drink, they were also presented with visual information indicating either that Coke was coming, that Pepsi was coming, or that an unknown drink was coming. This way the researchers could observe the brain activation of the participants while they consumed Coke and Pepsi, both when they knew which beverage they were drinking and when they did not.
What were the results? In line with the Coke and Pepsi “challenges,” it turned out that the brain activation of the participants was different depending on whether the name of the drink was revealed or not. This is what happened: Whenever a person received a squirt of Coke or Pepsi, the center of the brain associated with strong feelings of emotional connection—called the ventromedial prefrontal cortex, VMPFC—was stimulated. But when the participants knew they were going to get a squirt of Coke, something additional happened. This time, the frontal area of the brain—the dorsolateral aspect of the prefrontal cortex, DLPFC, an area involved in higher human brain functions like working memory, associations, and higher-order cognitions and ideas—was also activated. It happened with Pepsi—but even more so with Coke (and, naturally, the response was stronger in people who had a stronger preference for Coke).
The reaction of the brain to the basic hedonic value of the drinks (essentially sugar) turned out to be similar for the two drinks. But the advantage of Coke over Pepsi was due to Coke's brand—which activated the higher-order brain mechanisms. These associations, then, and not the chemical properties of the drink, gave Coke an advantage in the marketplace.
It is also interesting to consider the ways in which the frontal part of the brain is connected to the pleasure center. There is a dopamine link by which the front part of the brain projects and activates the pleasure centers. This is probably why Coke was liked more when the brand was known—the associations were more powerful, allowing the part of the brain that represents these associations to enhance activity in the brain’s pleasure center. This should be good news to any ad agency, of course, because it means that the bright red can, swirling script, and the myriad messages that have come down to consumers over the years (such as “Things go better with . . .”) are as much responsible for our love of Coke as the brown bubbly stuff itself.
EXPECTATIONS ALSO SHAPE stereotypes. A stereotype, after all, is a way of categorizing information, in the hope of predicting experiences. The brain cannot start from scratch at every new situation. It must build on what it has seen before. For that reason, stereotypes are not intrinsically malevolent. They provide shortcuts in our never-ending attempt to make sense of complicated surroundings. This is why we have the expectation that an elderly person will need help using a computer or that a student at Harvard will be intelligent.* But because a stereotype provides us with specific expectations about members of a group, it can also unfavorably influence both our perceptions and our behavior.
Research on stereotypes shows not only that we react differently when we have a stereotype of a certain group of people, but also that stereotyped people themselves react differently when they are aware of the label that they are forced to wear (in psychological parlance, they are “primed” with this label). One stereotype of Asian-Americans, for instance, is that they are especially gifted in mathematics and science. A common stereotype of females is that they are weak in mathematics. This means that Asian-American women could be influenced by both notions.
In fact, they are. In a remarkable experiment, Margaret Shih, Todd Pittinsky, and Nalini Ambady asked Asian-American women to take an objective math exam. But first they divided the women into two groups. The women in one group were asked questions related to their gender. For example, they were asked about their opinions and preferences regarding coed dorms, thereby priming their thoughts for gender-related issues. The women in the second group were asked questions related to their race. These questions referred to the languages they knew, the languages they spoke at home, and their family's history in the United States, thereby priming the women's thoughts for race-related issues.
The performance of the two groups differed in a way that matched the stereotypes of both women and Asian-Americans. Those who had been reminded that they were women performed worse than those who had been reminded that they were Asian-American. These results show that even our own behavior can be influenced by our stereotypes, and that activation of stereotypes can depend on our current state of mind and how we view ourselves at the moment.
Perhaps even more astoundingly, stereotypes can also affect the behavior of people who are not even part of a stereotyped group. In one notable study, John Bargh, Mark Chen, and Lara Burrows had participants complete a scrambled-sentence task, rearranging the order of words to form sentences (we discussed this type of task in Chapter 4). For some of the participants, the task was based on words such as aggressive, rude, annoying, and intrude. For others, the task was based on words such as honor, considerate, polite, and sensitive. The goal of these two lists was to prime the participants to think about politeness or rudeness as a result of constructing sentences from these words (this is a very common technique in social psychology, and it works amazingly well).
After the participants completed the scrambled-sentence task, they went to another laboratory to participate in what was purportedly a second task. When they arrived at the second laboratory, they found the experimenter apparently in the midst of trying to explain the task to an uncomprehending participant who was just not getting it (this supposed participant was in fact not a real participant but a confederate working for the experimenter). How long do you think it took the real participants to interrupt the conversation and ask what they should do next?
The amount of waiting depended on what type of words had been involved in the scrambled-sentence task. Those who had worked with the set of polite words patiently waited for about 9.3 minutes before they interrupted, whereas those who had worked with the set of rude words waited only about 5.5 minutes before interrupting.
A second experiment tested the same general idea by priming the concept of the elderly, using words such as Florida, bingo, and ancient. After the participants in this experiment completed the scrambled-sentence task, they left the room, thinking that they had finished the experiment—but in fact the crux of the study was just beginning. What truly interested the researchers was how long it would take the participants to walk down the hallway as they left the building. Sure enough, the participants in the experimental group were affected by the “elderly” words: their walking speed was considerably slower than that of a control group who had not been primed. And remember, the primed participants were not themselves elderly people being reminded of their frailty—they were undergraduate students at NYU.
ALL THESE EXPERIMENTS teach us that expectations are more than the mere anticipation of a boost from a fizzy Coke. Expectations enable us to make sense of a conversation in a noisy room, despite the loss of a word here and there, and likewise, to be able to read text messages on our cell phones, despite the fact that some of the words are scrambled. And although expectations can make us look foolish from time to time, they are also very powerful and useful.
So what about our football fans and the game-winning pass? Although both friends were watching the same game, they were doing so through markedly different lenses. One saw the pass as in bounds. The other saw it as out. In sports, such arguments are not particularly damaging—in fact, they can be fun. The problem is that these same biased processes can influence how we experience other aspects of our world. These biased processes are in fact a major source of escalation in almost every conflict, whether Israeli-Palestinian, American-Iraqi, Serbian-Croatian, or Indian-Pakistani.
In all these conflicts, individuals from both sides can read similar history books and even have the same facts taught to them, yet it is very unusual to find individuals who would agree about who started the conflict, who is to blame, who should make the next concession, etc. In such matters, our investment in our beliefs is much stronger than any affiliation to sports teams, and so we hold on to these beliefs tenaciously. Thus the likelihood of agreement about "the facts" becomes smaller and smaller as personal investment in the problem grows. This is clearly disturbing. We like to think that sitting at the same table together will help us hammer out our differences and that concessions will soon follow. But history has shown us that this is an unlikely outcome; and now we know the reason for this catastrophic failure.
But there’s reason for hope. In our experiments, tasting beer without knowing about the vinegar, or learning about the vinegar after the beer was tasted, allowed the true flavor to come out. The same approach should be used to settle arguments: The perspective of each side is presented without the affiliation—the facts are revealed, but not which party took which actions. This type of “blind” condition might help us better recognize the truth.
When stripping away our preconceptions and our previous knowledge is not possible, perhaps we can at least acknowledge that we are all biased. If we acknowledge that we are trapped within our perspective, which partially blinds us to the truth, we may be able to accept the idea that conflicts generally require a neutral third party—who has not been tainted with our expectations—to set down the rules and regulations. Of course, accepting the word of a third party is not easy and not always possible; but when it is possible, it can yield substantial benefits. And for that reason alone, we must continue to try.
Reflections on Expectations: Music and Food
Imagine walking into a truck stop off a deserted stretch of Interstate 95 at nine o’clock in the evening. You’ve been driving for six hours. You are tired and still have a long drive ahead of you. You need a bite to eat and want to be out of the car for a bit, so you walk into what appears to be a restaurant of sorts. It has the usual cracked-vinyl-covered booths and fluorescent lighting. The coffee-stained tabletops leave you a bit wary. Still, you think, “Fine, no one can screw up a hamburger that badly.” You reach for the menu, conveniently stashed behind an empty napkin dispenser, only to discover this is no ordinary greasy spoon. Instead of hamburgers and chicken sandwiches, you’re astonished to see that the menu offers foie gras au torchon, truffle pâté with frisée and fennel marmalade, gougères with duck confit, quail à la crapaudine, and so on.
Items like this would be no surprise in even a small Manhattan restaurant, of course. And it is possible that the chef got tired of Manhattan, moved to the middle of nowhere, and now cooks for whoever happens through. So is there a key difference between ordering gougères with duck confit in Manhattan and ordering it at an isolated truck stop on I-95? If you encountered such French delicacies at the truck stop, would you be brave enough to try them? Suppose the prices were not listed on the menu. What would you be willing to pay for an appetizer or an entrée? And if you ate it, would you enjoy it as much as you might if you were eating the same food in Manhattan?
On the basis of what we learned from Chapter 10, the answers are simple. Ambience and expectations do add a great deal to our enjoyment. You would expect less in such an environment, and as a consequence you would enjoy the experience at the truck stop less, even if you had the identical foie gras au torchon in both places. Likewise, if you knew that pâté is largely made of run-of-the-mill goose liver and butter* rather than super special ingredients, you would enjoy it much less.
A FEW YEARS ago the folks at the Washington Post were curious about the same basic topic and decided to run an experiment.