If a Life Falls in the Forest and No One Is There to LIKE it, Does It Make a Sound?
Comes a time when you realize you're basically a straight-up Luddite, or at best the loyal opposition to the social media age, and you might as well embrace that and shout your point fearlessly into the headwind of techno-utopianism.
To that end, I've been thinking about the posting addict, or the Facebook and Twitter compulsive. You know who you are… and you know if you aren't. These are the people who share constantly, scores of times a day, if not more. Usually, the posts and tweets are ephemeral. They tell us what the Sharing Addict had for breakfast, or how the traffic was.
Children and their endearing malapropisms are good Facebook fodder. Use family conversations as material for posts and you'll get lots of likes from people (some of whom your kids have never met), along with applauding comments about how brilliant and/or cute they are. Emotions online are sung about two octaves higher than the score of reality.
What accounts for compulsive posting, and is it a problem?
I’ve asked a few people why they post a lot, when it’s not expected for their professional lives, or for political activism. They usually answer that it takes so little time, and that it’s a good, efficient way to stay in touch, at least a little. And, they find it enjoyable to see who likes the comment, or comments on the comment.
I accept them at their word. But that mild, pro-social impulse doesn’t really explain why I’m seeing pictures of someone’s bowl of soup on my Facebook page, or why some don’t dare to eat a peach, as T.S. Eliot might observe, without letting Twitter know.
I wonder, to recall the koan, if a life falls in a forest and there’s no one there to LIKE it, does it make a sound?
I fear some social media mutation of vanity and voyeurism, one that compels people to look at themselves doing things, even crimes, and even sex, and to have an audience look at them as they do their living, such that they simply can't stop. It feels too good, this admiring and looking at themselves, this ongoing curation of the self in the motion of doing its normal, quotidian activities, so that it becomes a needful thing.
Otherwise, how to explain the compulsion toward banal sharing? Or the inscrutable impulse of rapists to self-incriminate by posting details of their assault online (more troublingly, perhaps they feel they've done nothing wrong), and of young women who agree to have their boyfriends tape them having sex—which becomes fodder to be posted on social media "revenge" sites against them when, as sure as night follows day, the relationship ends?
I get it, in one sense. Looking at memories of things done, even things just done two seconds ago, can be more satisfying than the event itself, since the living of life often demands a great deal of energy from us, even when we’re enjoying ourselves, but the memory of living that life demands much less.
Maybe social media addicts mostly wanted reassurance, affirmation, and figurative "likes" from their real-life friends in the pre-Facebook age. They had a vanity jones, in other words, perhaps grounded in insecurity (if they're good looking, we tend to say they're insecure) or perhaps grounded in malignant narcissism (if they're not good looking, we tend to say they're arrogant). Now, the compulsive poster doesn't need as many face-to-face confidants to satisfy the vanity jones.
An hours-long, ongoing, spontaneous conversation with a friend is a laborious dissertation in comparison to the rebus-like declarations and telegraphic effusions or take-downs of Twitter. Social media is a more efficient, fast-food affirmation.
Is all this a problem? Most likely, yes.
A woman who was part of a group of tight-knit friends in college innocently scrolls through Facebook one day, and gets hit over the head with the emotional two-by-four of having to see a photo of her entire tight-knit group of friends at a reunion—everyone but her.
Let’s say you have a friend, actually a close friend, and one you’ve known for a really long time. Before Facebook, you saw each other fairly regularly.
It’s not the same now. Clearly, the close friend is still around, and still has time to do fun, social things. You know this, because she posts obsessively. She posts pictures of herself with other friends, at parties. She posts photos of the food she’s about to eat. She posts self-admiring updates about marriage, her husband, and family.
What used to be known as bragging is now the currency of a social life.
It's possible that you've alienated the friend, or that she's bored with the friendship. It's also possible that Facebook and Twitter create a simulacrum of contact, such that a friend genuinely feels as if she's dispatched a social duty by posting about her commute, or by another post reminding her followers, some of whom barely know her, how much she loves her husband.
In her mind this might constitute being “in touch” with friends.
If you think this, be warned: many of us don’t consider the reading of these posts an act of being in touch with you. Not at all.
Social media is reconfiguring some of the basic concepts that undergird friendship, concepts of obligation, reciprocity, contact, availability, exclusivity, intimacy, and, in the examples here, good friendship hygiene.
I don’t want to be the authenticity fogey. Nor am I arguing that social media isn’t in some ways quite real. I also like it, in its place.
But it lacks many of what have been for millennia the signature features of friendship and social bonding. It lacks depth: the information shared is too public and often too trivial for that. It lacks selectivity: all friends, whether people the poster has never met or spoken to, or parents and former best friends, get the same information, assuming that the poster, as most often seems to be the case, isn't using a private group setting. If everyone is a friend, then no one is a friend. Social media lacks one-on-one intimacy, by its nature. It lacks privacy and discretion: a personal Greek chorus witnesses your communication. And it lacks that messy, delicate, unpredictable but friendship-sustaining quality of entanglement: when you're at dinner with a friend, you can't as easily walk away or flip off the smartphone when things get boring, uncomfortable, or socially taxing.
And more to the point: why in the world do you think we want to see a photo of your soup?
Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.
- Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
- They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
- The research raises many ethical questions and puts to the test our current understanding of death.
The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, but B-grade sci-fi specifically. What instantly springs to mind is the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?
But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week's Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, what we once thought of as dead.
What's dead may never die, it seems
The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the functions normally regulated by the organ. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse, and their brains had been completely removed from their skulls.
BrainEx pumped an experimental solution into the brain that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.
The researchers have managed to keep some brains alive for up to 36 hours, and do not yet know whether BrainEx could sustain the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and the lead researcher.
As a control, other brains received either a fake solution or no solution at all. None revived brain activity, and all deteriorated as normal.
The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues of such studies would be brain disorders and diseases. This could point the way to developing new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.
"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, a bioethicist at the Duke University School of Law who wrote a commentary accompanying the study, told National Geographic.
An ethical gray matter
Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains did not approach neural activity anywhere near consciousness.
The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic should they see signs of consciousness.
Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.
Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?
"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."
One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.
The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.
"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.
It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.
Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a "cellularly active" brain? The distress of a partially alive brain?
The dilemma is unprecedented.
Setting new boundaries
Another science fiction story that comes to mind here is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."
She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.