Why You Don't Have to Be Rational to Run Your Own Life
David Berreby is the author of "Us and Them: The Science of Identity." He has written about human behavior and other science topics for The New Yorker, The New York Times Magazine, Slate, Smithsonian, The New Republic, Nature, Discover, Vogue and many other publications. He has been a Visiting Scholar at the University of Paris, a Science Writing Fellow at the Marine Biological Laboratory, a resident at Yaddo, and in 2006 was awarded the Erving Goffman Award for Outstanding Scholarship for the first edition of "Us and Them." David can be found on Twitter at @davidberreby and reached by email at david [at] davidberreby [dot] com.
Are we becoming too obsessed with the idea that people can't think straight? When I began blogging here at BigThink five years ago, I would have said no. After all, for the most part, economists and many social scientists still operate on the assumption that people are rational—that we can, whenever we choose, make decisions by consciously processing real facts through a logical calculator in our heads. (And, the economists would add, that our goal in these calculations is always to maximize our personal share of some measurable benefit, like money or square footage or fur coats.) This is pretty obviously not what real people do, so there is nothing wrong with pointing out the facts about how we actually think. Over the past couple of years, though, I've noticed loose talk about human mental incapacity being used to justify assaults on personal autonomy. If we can't think straight, after all, it follows that we need "help." And much of this "help" consists of taking choices away from human beings and giving them to organizations, machines or software.
Some examples: Once a human being called a boss would decide who would work which shifts at the local coffee shop. Today, Starbucks and many other retail chains are using algorithms to schedule workers, which is great for the bottom line (why pay more people than you need if you can predict that traffic will be light this Thursday?). Medium, the hot new writing site, is paying some writers and editors according to the amount of time readers spend on their material. This makes for better metrics on the precise relationship between the content and the response. Or consider this technology, now being used in high school gyms in Dubuque, Iowa: It tracks students' heart rates via a strapped-on monitor worn by each kid, to make sure they are exercising enough in class. Then there is this gizmo, described by my fellow BigThink blogger Teodora Zareva, which delivers fines, electric shocks and social-media humiliation if you do not comply with your own goals. No doubt this is much more effective than just telling yourself you should get to the gym more often.
None of these things is evil in intent; they're benign. Like the government "nudging" about which I've written a fair amount (for example, here and here), these helpful technologies are aimed at making life easier and more profitable. As users of such technology, most people are delighted. Each individual decision-making aid seems so reasonable and sensible, offering reliability and "seamlessness" in place of irrationality and friction. It's only as targets of the technology (the worker whose schedule is machine-written, the kid who's tired and wants to slow down while running laps) that we feel annoyed. The rhetoric of irrationality is a powerful balm against such vexation. You know, it says, you can't trust yourself.
Many of us (not all) would argue that autonomy, the process of self-governance, is valuable. It is, after all, the theoretical basis of our civil rights. So how are we supposed to preserve that autonomy in the face of evidence that machines and organizations and apps are better at making our decisions than we are?
One way would be to deny the whole problem. In 2012, my fellow BigThink blogger Steve Mazie argued that claims about our inability to reason were overblown. The other day he reran that post here and restated his argument here. He thinks the basis for perceptions of human irrationality is exotic laboratory manipulations that have little to do with real life. It is easy to support this claim with some cherry-picked examples of truly odd-sounding experiments. Few of us are ever confronted with examples of the "Linda problem" or the Wason test.
However, Mazie doesn't mention a host of other experiments that document "irrational" behavior in situations that are quite natural and familiar to people. The ultimatum game, for example, is a negotiation in which two people have to decide how to split up money or some other valuable: One player proposes a division, and the other can accept it or reject it, in which case neither gets anything. Dividing up something between two people is a negotiation we all engage in during our lives, from the playground to the edge of the grave. Concerned about results that come only from experiments on people in WEIRD (Western, Educated, Industrialized, Rich and Democratic) societies, Joe Henrich and his colleagues have run this experiment on many different continents with many different kinds of people. The "rational" move is to accept any positive offer, since something beats nothing, and, knowing that, to propose the smallest positive amount. Yet almost no one (with the occasional exception of students recently trained in economics) plays that way: Low offers get rejected, and most proposers offer something close to an even split. I think Mazie is right that casual talk about irrationality has gotten out of hand. But I don't think it's because there is no "there" there.
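To make the mismatch concrete, here is a minimal sketch (in Python, with an arbitrary $10 pot) of what the "rational" prediction for the ultimatum game looks like on its own terms. This is an illustration of the textbook logic, not anyone's experimental code: a purely self-interested responder accepts any positive offer, so a purely self-interested proposer offers the bare minimum.

```python
# Sketch of the ultimatum game's "rational" (subgame-perfect) outcome.
# A proposer splits a pot; the responder accepts (both keep their shares)
# or rejects (both get nothing).

POT = 10  # dollars, split in whole-dollar increments (arbitrary choice)

def rational_responder(offer: int) -> bool:
    """A purely self-interested responder accepts any positive offer:
    something is better than nothing."""
    return offer > 0

def rational_proposer(pot: int, responder) -> int:
    """A purely self-interested proposer offers the smallest amount
    the responder will still accept."""
    for offer in range(pot + 1):
        if responder(offer):
            return offer
    return pot  # fallback if the responder rejects everything

offer = rational_proposer(POT, rational_responder)
print(f"'Rational' offer: ${offer} of ${POT}")  # the minimum positive amount
```

Run against a rational responder, the proposer offers $1 and keeps $9. In Henrich's cross-cultural studies, real people do nothing of the kind, which is exactly the point.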
And so we have a problem: Personal autonomy has been defended for more than a century by the principle that people are rational when they choose to be. This principle seems to be false. At the same time, practical challenges to autonomy—what the philosopher Evan Selinger calls the "outsourcing" of humanity to governments, machines and apps—are growing. How is autonomy to be defended?
I think the answer is this: Decouple the defense of autonomy from the claim that people are rational. Instead of defending the notion that people will make good decisions if they are free, I'd rather argue that the quality of their decisions is irrelevant. It's the process of making them that matters. We don't want to outsource that process to an institution, a company or a machine, because doing so makes us value ourselves, and our humanity, less. The process of wrestling with yourself over the gym is part of being a person, whichever way it turns out. The process of scheduling workers (and dealing with their sighs and sulks and protests) is part of what it means to be in a community, and to work with other people. Machines and nudges can make more of our experiences "seamless" and efficient, but hold on: we need the seams.
Perhaps this is hopeless, in the face of the seductions of our gadgets, marketing campaigns and the "psychological state" that increasingly nudges us. But isn't the erosion of personal autonomy worth resisting?
Follow me on Twitter: @davidberreby