Why Your Devices Shouldn't Do the Work of Being You

In my post last week I linked to some work by Evan Selinger, a philosopher at the Rochester Institute of Technology who has been thinking hard about the ways seemingly convenient and harmless technologies affect their users. I Skyped with Selinger the other day, and we spoke about the impact that current and soon-to-arrive gadgets are having on people's personal autonomy. What follows is an edited version of that conversation. This is the first in an occasional series of interviews with thinkers doing important work in and around the subject of autonomy and the changes it is undergoing.


DB: Apple's upcoming iOS will have a predictive text feature that goes beyond spell check. It will analyze your emails and texts and use that to make guesses about what you're going to write next. In other words, it will suggest words and sentences for your texts and emails. That sounds convenient and harmless. Yet you're alarmed by it. Why?

ES: I'm horrified by this, to be honest with you. What worries me is that this will seem like a cool feature to most users. So rather than needing to flesh out my thoughts to you, I'll say something good enough, something that was recommended. And to put in the energy and effort to override a good-enough [phrase], you have to overcome a certain amount of inertia. It requires extra effort. So I think there's going to be a natural temptation to rely on the tool rather than override it. The more we don't autonomously struggle with language, grapple to find the right word, muscle through to bend language poetically, the less we're able to treat conversation as an intentional act. As something that really expresses what we're trying to say. And as goes the iPhone, so goes the rest of the world, right? When the LA Times redesigned its online edition, it began opening each piece with three tweetable summaries. And they put them [above the article, so you can tweet without even reading it and deciding what you think matters in it]. Are successful tweeters going to use this? Probably not. But the fact that this is becoming more embedded in the architecture, that's what concerns me. We're starting to find more and more cases where what we want to communicate to people can be automated, and more and more opportunities to automate it.

DB: But the end result of these apps is very likely going to be the same as it would have been if a human had done the work herself. That's why predictive text works: it can make a good guess about what you're going to say. So why not offload the work to an app?

ES: Except predicting you is predicting a predictable you. Which is itself subtracting from your autonomy. And it’s encouraging you to be predictable, to be a facsimile of yourself. So it's prediction and a nudge at the same moment. It’s not just a guessing game—"here’s what I think you would say." It’s providing you the option to [go with the prediction]. And imposing a cost of energy to override.
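
(To make the mechanics concrete: the kind of feature Selinger describes rests on simple frequency models of your own past writing. Below is a minimal sketch in Python. It is purely illustrative, not Apple's actual implementation; the corpus and function names are invented. Notice that such a model can only ever suggest what you have already said, which is exactly his point about being nudged toward a predictable you.)

```python
# A minimal, hypothetical sketch of bigram next-word prediction.
# Not Apple's QuickType; just the general idea of a frequency model.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count which word follows which across a body of past messages."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def suggest(model: dict, word: str, k: int = 3) -> list:
    """Suggest the k words this writer most often typed after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Train on a user's past messages; the suggestions simply mirror
# that user's own habits back at them.
past_messages = (
    "running late be there soon "
    "running late sorry "
    "running behind see you soon"
)
model = train(past_messages)
print(suggest(model, "running"))  # ['late', 'behind']
```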

DB: But if the prediction is good, because the analysis is really astute, what's the harm?

ES: I guess the slogan answer here would be something like "effort is the currency of care." And by effort I mean a deliberate, focused presence. When we abdicate that, we inject less care into a relationship. That's what I think automation does. And that's what I think some of these people leave out of the equation.

DB: But no one is imposing these apps on people. If you don't want to use predictive text to write your email, you can turn it off. If that video of JIBO the family robot reading to the kid creeps you out, just don't buy one. What's the problem?

ES: Once it’s available, it's hard to have the willpower to override something. Especially when we think it's convenient and harmless [because our model is] spell checkers and calculators. [But with those] we outsource cognitive tasks, not intimate ones. Relationships are different.

DB: Still, it's hard to imagine why people would want more friction in their lives than they have to have. What principle could they use to sort out techs that help from techs that harm?

ES: [The philosopher] Albert Borgmann distinguishes between the "device paradigm" and "focal practices." The device paradigm turns things into commodities: things that are ubiquitous, easy, and require no effort or understanding. I get in my car and drive off; I have no idea how it works. I live in this environment that gives me everything I want while requiring very little of me. Little by way of skill, by way of understanding. And that is supposed to be the good life. His point is that we've been so disburdened of effort through the device paradigm that we're incentivized to put less effort into our lives.

And we're told this is the apex, this is eudaemonia.

Borgmann thinks it is completely the other way around. That we only find real meaning in our lives in those instances where we're focused and attentive and building up skill. In a focal practice there isn't a separation of means from ends. These are activities where the journey is as important as the destination. It calls forth skill in a way that makes us feel a sense of accomplishment when we do it. And it gives us a memorable sense of experience. It gives us a vivid sense of experience. It gives us a connected sense of experience. He says, for example, that rather than running with headphones you should run while paying attention to your body and your posture and your breathing, and taking in the environment.

DB: But the selling point for tech is supposed to be exactly this: by automating the repetitive aspects of life (the text you send all the time, the work of thinking about dinner and getting it assembled), you have more time for focal experiences.

ES: So what we want to do when we're not burdened by crap is care about stuff. [Trouble is] this is the thing that precisely prevents us from being able to care about stuff. The ads say "we'll automate this task and you go spend time caring" but we're building the infrastructure so you can't.

DB: So what can people do to protect their autonomy—their ability to be their engaged and unpredictable selves?

ES: The best that we can do is become more [alert] to the values that are embedded in these systems. And ask what kinds of people we become if we become dependent on them. If we become habituated to them, if our relationships become more mediated by them. What makes this so complicated is that there are no bad actors. No one is out there trying to degrade the quality of our lives. It's that their agendas are small but collectively these small agendas can have a profound impact on who we are.

Follow me on Twitter: @davidberreby

Yale scientists restore brain function to 32 clinically dead pigs

Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.

[Image: still from John Stephenson's 1999 rendition of Animal Farm]
  • Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
  • They hope the technology will advance our understanding of the brain, potentially leading to new treatments for debilitating diseases and disorders.
  • The research raises many ethical questions and puts to the test our current understanding of death.

The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, but specifically B-grade sci-fi. What instantly springs to mind is the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?

But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week's Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, what we once thought of as dead.

What's dead may never die, it seems

The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the circulatory work normally done by the body, supplying the organ with oxygen and nutrients. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse, and their brains had been completely removed from their skulls.

BrainEx pumped an experimental solution into the brains that essentially mimics blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to restart many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.

The researchers have managed to keep some brains alive for up to 36 hours, and do not yet know whether BrainEx could have sustained them longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and the lead researcher.

As a control, other brains received either a fake solution or no solution at all. None regained brain activity, and all deteriorated as dead brains normally do.

The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues for such studies would be brain disorders and diseases. This could point the way to new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.

"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, the bioethicists at the Duke University School of Law who wrote the study's commentary, told National Geographic.

An ethical gray matter

Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains showed no neural activity anywhere near the level associated with consciousness.

The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic if they saw signs of consciousness.

Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.

Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?

"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."

One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.

The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.

"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.

It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.

Ethical review boards evaluate research protocols and can reject any that causes undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment about the suffering of a "cellularly active" brain? The distress of a partially alive brain?

The dilemma is unprecedented.

Setting new boundaries

Another science fiction story that comes to mind here is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."

She's right. The researchers undertook this work for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.
