
Why You Don't Have to Be Rational to Run Your Own Life

August 25, 2014, 10:00 AM

Are we becoming too obsessed with the idea that people can't think straight? When I began blogging here at BigThink five years ago, I would have said no. After all, for the most part, economists and many social scientists still operate on the assumption that people are rational—that we can, whenever we choose, make decisions by consciously processing real facts through a logical calculator in our heads. (And, the economists would add, that our goal in these calculations is always to maximize our personal share of some measurable benefit, like money or square footage or fur coats.) This is pretty obviously not what real people do, so there was value in talking up the facts of human irrationality. Over the past couple of years, though, I've noticed loose talk about human mental incapacity being used to justify assaults on personal autonomy. If we can't think straight, after all, it follows that we need "help." And much of this "help" consists of taking choices away from human beings and giving them to organizations, machines or software.

Some examples: Once a human being called a boss would decide who would work which shifts at the local coffee shop. Today, Starbucks and many other retail chains are using algorithms to schedule workers, which is great for the bottom line (why pay more people than you need if you can predict that traffic will be light this Thursday?). Medium, the hot new writing site, is paying some writers and editors according to the amount of time readers spend on their material: precise metrics on the relationship between content and response, with no editor's judgment required. Or consider this technology, now being used in high school gyms in Dubuque, Iowa: It tracks students' heart rates directly, via straps worn by each kid, to make sure they are exercising enough in class. Then there is this gizmo, described by my fellow BigThink blogger Teodora Zareva, which delivers fines, electric shocks and social-media humiliations if you do not comply with your own goals. No doubt this is much more effective than just telling yourself you should get to the gym more often.

None of these things is evil in intent. Like the government "nudging" about which I've written a fair amount (for example, here and here), these helpful technologies are aimed at making life easier and more profitable. As users of such technology, most people are delighted. Each individual decision-making aid seems so reasonable and sensible, offering reliability and "seamlessness" in place of irrationality and friction. It's only as targets of technology (the worker whose schedule is machine-written, the kid who's tired and wants to slow down while running laps) that we feel annoyed. The rhetoric of irrationality is a powerful balm against such vexation. You know, it says, you can't trust yourself.

Many of us (not all) would argue that autonomy, the process of self-governance, is valuable. It is, after all, the theoretical basis of our civil rights. So how are we supposed to preserve that autonomy in the face of evidence that machines and organizations and apps are better at making our decisions than we are?

One way would be to deny the whole problem. In 2012, my fellow BigThink blogger Steve Mazie argued that claims about our inability to reason were overblown. The other day he reran that post here and restated his argument here. He thinks the basis for perceptions of human irrationality is exotic laboratory manipulations that have little to do with real life. It is easy to support this claim with some cherry-picked examples of truly odd-sounding experiments. Few of us are ever confronted with examples of the "Linda problem" or the Wason test.

However, Mazie doesn't mention a host of other experiments that document "irrational" behavior in situations that are quite natural and familiar to people. The ultimatum game, for example, is a negotiation in which two people have to decide how to split up money or some other valuable. Dividing up something between two people is a negotiation we all engage in throughout our lives, from the playground to the edge of the grave. Concerned about results that come only from experiments on people in WEIRD (Western, Educated, Industrialized, Rich and Democratic) societies, Joe Henrich and his colleagues have run this experiment on many different continents with many different kinds of people. Almost no one (with the occasional exception of students recently trained in economics) does the "rational" thing in that game. I think Mazie is right that casual talk about irrationality has gotten out of hand. But I don't think it's because there is no "there" there.

And so we have a problem: Personal autonomy has been defended for more than a century by the principle that people are rational when they choose to be. This principle seems to be false. At the same time, practical challenges to autonomy—what the philosopher Evan Selinger calls the "outsourcing" of humanity to governments, machines and apps—are growing. How is autonomy to be defended?

I think the answer is this: Decouple the defense of autonomy from the claim that people are rational. Instead of defending the notion that people will make good decisions if they are free, I'd rather argue that the quality of their decisions is irrelevant. It's the process of making them that matters. We don't want to outsource that process to an institution, a company or a machine because doing so makes us value ourselves, and our humanity, less. The process of wrestling with yourself over the gym is part of being a person, whichever way it turns out. The process of scheduling workers (and dealing with their sighs and sulks and protests) is part of what it means to be in a community, and to work with other people. Machines and nudges can make more of our experiences "seamless" and efficient, but perhaps we need the seams.

Perhaps this is hopeless, in the face of the seductions of our gadgets, marketing campaigns and the "psychological state" that increasingly nudges us. But isn't the erosion of personal autonomy worth resisting?

Follow me on Twitter: @davidberreby

 
