Panera Bread Is Getting Less Personal!
As I've said before, my home away from home is Panera Bread. It has a fireplace with real gas logs, couches, and a leisurely and rather personal environment. It's a lot like Cheers, but with the beer replaced by coffee. So it's not a place to escape from the wife and kids; they're as welcome as anyone. It's a place to bring your laptop or tablet or whatever and do your work, blurring, as so much of life does these days for those of us who do "intellectual labor," the distinction between leisure or play and being productive. Cheers, of course, was a joyful respite from your job, and even Frasier couldn't take his "scholarly productivity" seriously with a buzz on.
I know, from talking to the people who work at Panera, how much of what the cashiers and so forth do and say is scripted from the home office in St. Louis. But they still manage to be real people with whom you can have personal conversations. And so I've found out how admirable those employees are, and how much they need their jobs. I could tell you stuff about their spouses and kids and hopes and dreams.
But now Panera is replacing CASHIERS with KIOSKS. The reasons: There's a minor "slow-service" issue, which amounts, in my experience, to an occasional five-minute wait. Plus, sometimes real people get orders wrong. That has happened to me at Panera, if pretty rarely. The mistake, however, is always remedied promptly and cheerfully by a real person.
This change I can't believe in is part of a pervasive techno-capitalist trend. First, you reduce the work of a real person to a script, and then you replace the person with a machine. The members of the "cognitive elite" who do mental labor at the corporate headquarters remove as much discretion and imagination as possible from what the employees at a particular store do. Once their work becomes something like that of a cog in a machine, they can be replaced by machines.
The article says, not surprisingly, that other "fast-food" establishments are also replacing people with screens and operating systems--like those that make up a tablet.
We can see here why this and probably future economic recoveries will be jobless. And why, as libertarian futurist Tyler Cowen predicts, more and more people will be thought of as marginally productive or not really productive at all.
Don't get me wrong. I'm all for economic and technological progress. But we have to be alert to personal costs and act accordingly.
I'm not saying the remedy comes from the kind of redistributive taxation recommended by the Marxist (who apparently never read Marx) du jour Thomas Piketty. Piketty writes way too much from the moralistic point of view—not really shared by Marx—that the highly productive members of the cognitive elite don't deserve their money. Money and what it can buy are, in fact, what they deserve.
But I might say that it's getting harder to justify low taxes and deregulation as empowering "job creators."
One libertarian responded to this article by saying that the real problem is the minimum wage. Every time you raise it, the incentive is to replace costly people with discount machines. If we paid people what they're worth in terms of productivity, employers would be more inclined to use more of them. That's true to a limited extent.
But surely you wouldn't want to follow that line of thinking to the extreme Marx did. He predicted that wages for most people would be reduced to subsistence and nothing more. They would be paid exactly what it takes to keep them alive.
The guys at Panera make a bit more than that. But cutting their wages a bit really wouldn't save them from being replaced by more reliable machines.
Consider that the most stunningly efficient and error-free operation around might be the Amazon warehouse. I read somewhere that those warehouses used to have close to two hundred employees. Now robotics has increased both reliability and productivity and cut the number of people working in a warehouse down to fewer than twenty.
Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.
- Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
- They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
- The research raises many ethical questions and puts to the test our current understanding of death.
The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, but specifically B-grade sci-fi. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?
But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week, Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, by what we once thought of as dead.
What's dead may never die, it seems
The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the functions normally regulated by the organ. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse; their brains had been completely removed from their skulls.
BrainEx pumped an experimental solution into the brains that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.
The researchers have managed to keep some brains alive for up to 36 hours, and currently do not know whether BrainEx could sustain the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and the lead researcher.
As a control, other brains received either a fake solution or no solution at all. None revived brain activity; all deteriorated as expected.
The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues of such studies would be brain disorders and diseases. This could point the way to developing new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.
"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, a bioethicist at the Duke University School of Law who wrote the study's accompanying commentary, told National Geographic.
An ethical gray matter
Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains did not approach neural activity anywhere near consciousness.
The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic should they see signs of consciousness.
Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.
Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?
"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."
One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.
The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.
"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.
It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.
Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a "cellularly active" brain? The distress of a partially alive brain?
The dilemma is unprecedented.
Setting new boundaries
Another science fiction story that comes to mind here is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."
She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.