Why Your Devices Shouldn’t Do the Work of Being You


In my post last week I linked to some work by Evan Selinger, a philosopher at the Rochester Institute of Technology who has been thinking hard about the ways seemingly convenient and harmless technologies affect their users. I Skyped Selinger the other day, and we spoke about the impact that current and soon-to-arrive gadgets are having on people’s personal autonomy. What follows is an edited version of that conversation. This is the first in an occasional series of interviews with thinkers doing important work in and around the subject of autonomy and the changes it is undergoing.


DB: Apple’s upcoming iOS will have a predictive text feature that goes beyond spell check. It will analyze your emails and texts and use them to make guesses about what you’re going to write next. In other words, it will suggest words and sentences for your texts and emails. That sounds convenient and harmless. Yet you’re alarmed by it. Why?

ES: I’m horrified by this, to be honest with you. What worries me is that this will seem like a cool feature to most users. Rather than needing to spell out my thoughts to you, I’ll say something good enough that was recommended. And to put in the energy and effort to override a good-enough [phrase], you have to overcome a certain amount of inertia. It will require extra effort to do that. So I think there’s going to be a natural temptation to rely on that tool rather than override it. The less we autonomously struggle with language, grapple to find the right word, muscle through to bend language poetically, the less we’re able to really treat conversation as an intentional act, as something that really expresses what we’re trying to say. And as goes the iPhone, so goes the rest of the world, right? When the LA Times redesigned its online edition, it began each piece with three tweetable summaries. And they put them [above the article, so you can tweet without even reading it and deciding what you think matters in it]. Are successful tweeters going to use this? Probably not. But the fact that this is becoming more embedded in the architecture, that’s what concerns me. I believe we’re starting to find more and more cases where what we want to communicate to people will be automated. There are more and more opportunities to automate that.

DB: But the end result of these apps is very likely going to be the same as it would have been if a human had done the work herself. That’s why predictive text works: it can make a good guess about what you’re going to say. So why not offload the work to an app?

ES: Except predicting you is predicting a predictable you. Which is itself subtracting from your autonomy. And it’s encouraging you to be predictable, to be a facsimile of yourself. So it’s prediction and a nudge at the same moment. It’s not just a guessing game: “here’s what I think you would say.” It’s providing you the option to [go with the prediction]. And imposing a cost of energy to override.

DB: But if the prediction is good, because the analysis is really astute, what’s the harm?

ES: I guess the slogan answer here would be something like “effort is the currency of care.” And by effort I mean a deliberate, focused presence. When we abdicate that, we inject less care into a relationship. That’s what I think automation does. And that’s what I think some of these people leave out of the equation.

DB: But no one is imposing these apps on people. If you don’t want to use predictive text to write your email, you can turn it off. If that video of JIBO the family robot reading to the kid creeps you out, just don’t buy one. What’s the problem?

ES: Once it’s available, it’s hard to have the willpower to override something. Especially when we think it’s convenient and harmless [because our model is] spell checkers and calculators. [But with those] we outsource cognitive tasks, not intimate ones. Relationships are different.

DB: Still, it’s hard to imagine why people would want more friction in their lives than they have to have. What principle could they use to sort out techs that help from techs that harm?

ES: [The philosopher] Albert Borgmann distinguishes between the “device paradigm” and “focal practices.” The device paradigm turns things into commodities: things that are ubiquitous and easy and require no effort or understanding. I get in my car and drive off; I have no idea how it works. I live in this environment that gives me everything I want while it requires very little of me. Little by way of skill, little by way of understanding. And that is supposed to be the good life. His point is that we’ve been so disburdened of effort through the device paradigm that we’re incentivized to put less effort into our lives.

And we’re told this is the apex, this is eudaemonia.

Borgmann thinks it is completely the other way around: that we only find real meaning in our lives in those instances where we’re focused and attentive and building up skill. In a focal practice there isn’t a separation of means from ends. These are activities where the journey is as important as the destination. It calls forth skill in a way that gives us a sense of accomplishment when we do it. And it gives us a memorable sense of experience. It gives us a vivid sense of experience. It gives us a connected sense of experience. He says, for example, that rather than running with headphones you should run while paying attention to your body and your posture and your breathing, and taking in the environment.

DB: But the selling point for tech is supposed to be exactly this: by automating the repetitive aspects of life (the text you send all the time, the work of thinking about dinner and getting it assembled), you have more time for focal experiences.

ES: So what we want to do when we’re not burdened by crap is care about stuff. [Trouble is,] this is precisely the thing that prevents us from being able to care about stuff. The ads say “we’ll automate this task and you go spend time caring,” but we’re building the infrastructure so you can’t.

DB: So what can people do to protect their autonomy—their ability to be their engaged and unpredictable selves?

ES: The best that we can do is become more [alert] to the values that are embedded in these systems, and ask what kinds of people we become if we become dependent on them, if we become habituated to them, if our relationships become more mediated by them. What makes this so complicated is that there are no bad actors. No one is out there trying to degrade the quality of our lives. It’s that their agendas are small, but collectively these small agendas can have a profound impact on who we are.

Follow me on Twitter: @davidberreby

