Three Big Problems With Facebook Activism
“Slacktivism” online is exactly as deep as the paper-thin knowledge and commitment that fuels it.
BY REBECCA TEICH (guest blogger)
Many of us have fallen victim to it: changing our profile picture to those white equals signs atop a red background because someone said that it meant you support marriage equality, sharing the now-infamous #Kony2012 video that no one ever watched in full, or reposting the Huffington Post article only because the title was too witty and relevant not to.
From warring perspectives on the conflict in Gaza to the now strangely dated hashtag #bringbackourgirls, the viral social issue of the hour floods Twitter, Instagram, and Facebook with content that looks, on the outside, like deeply felt social activism. But for all the pathos running rampant over news feeds and blogging sites, there is little depth to speak of, and virtually no change afoot in the real world. “Slacktivism” online is exactly as deep as the paper-thin knowledge and commitment that fuels it.
Social Gain vs. Social Change
Social media might be said to revolutionize political activism, connecting us to like-minded peers in previously inconceivable ways. The hive is easier to stir than ever before. But these technologies have a much darker side. Facebook activism amplifies harmful underpinnings of capitalism. It drastically alters how we conceive of ourselves. And ironically, Facebook does harm to the social causes offline that we champion online. Why? Social media platforms transform social issues into cultural capital: issues become labels of political alignment and lend an appearance of social awareness attached to a digitally curated self. They become a means to the end of social gain, rather than of social change.
Through social media, we engage in personal branding. We cultivate a name and image that we can manipulate for social gain: “likes,” retweets, comments, and shares—rather than real change on the ground—become our primary goal. We choose how we desire to be seen by others and then manipulate that artificial “self” in accord with our known, or desired, audience.
No self-presentation through social media can be fully genuine. The prospect of social rewards always taints that decision-making process. Individuals cultivate their amplified selves on such platforms by sharing news articles, "liking" pages, or re-posting other people's writings—each a signifier attached to their "profile." There is a hyper-awareness of our image in the eyes of others; whether consciously or not, our profiles become a self-promoting narrative.
The Perils of "Slacktivism"
And the end-goal of this online “activism” is typically limited to raising awareness. As valuable as it is to widen people’s understanding of the world, no tangible change flows from awareness alone. In addition, many online activist campaigns reveal their true colors when they raise awareness of convenient untruths.
Last year we saw massive numbers of our Facebook friends change their profile pictures to a red equals sign to support marriage equality, which inadvertently served as mass-advertising for the organization that uses the emblem as its logo (with a few color changes from time to time). What these Facebook users might not care to know is that the Human Rights Campaign (HRC), the organization behind the logo, has been subject to devastating criticism from the LGBTQ+ community. The HRC, Derrick Clifton writes, represents a “well-off, able-bodied, gender conforming, non-immigrant and white” audience that ignores problems of racial injustice in the LGBTQ+ community and has “a long history of throwing trans people under the bus.” Few users adopting the logo as their own profile picture had any idea they were promoting not only a political position but also a specific (and deeply flawed) organization.
Most people jumping at the chance to use the hashtag #bringbackourgirls had little to no knowledge of the history and politics of the country in which they obliquely advocated foreign intervention. And they had no clue that many Nigerians living outside America oppose US intervention, given the long history of harm done there by US foreign aid and meddling.
These examples of “slacktivist” rebellion from current events are prevalent within social media, especially (but not exclusively) among the liberal class who claim to advocate for social justice. The irony lies in the fact that when the label of “rebel” enters popular culture and “trendiness,” it becomes conformity. The idea of rebellion becomes another commodified modifier to one’s online self. “Rebellion” acts as a signifier to denote a sense of global awareness and a self-directed, educated position on the subject matter. Despite the appearance of rebellion in this public display of a seemingly more radical opinion, the individual is doing just the opposite. We are always keenly aware of our audience; often that audience shares our opinions, as it is comprised of “friends” or “followers.”
Individuals craft their public selves and accompanying opinions to obtain social reward from a positive response from their following. Social issues and critique become buzzwords or clickbait. They function as modifiers for that online public self, and lose their rebellious force. Those issues become objects used to accumulate cultural capital in exchange for social reward. In this process, both the public self and the social issues are commodified to achieve a reward external to the commodity itself.
This isn’t to say that all that happens on these platforms is negative. With this new form of media and communication, there are many liberating and redeeming qualities that arise from these platforms, including the newfound ability to bridge conversational gaps and the opportunity for a larger number of people to engage in a conversation and disseminate knowledge and opinions relatively freely. Social media is fast, easy, cheap and, in one sense, democratic.
But there is the corrupting matter of money. Facebook shareholders’ bottom line is not how much social change the site inspires. No, social media sites are profit-maximizing corporations, as all those ads and “sponsored” content in our newsfeeds remind us. Social media sites, and even some social movements, should not be misunderstood as fully public. There is censorship involved, whether through internal community policing or external policing by the platform to protect its profits, ensuring that voices stay in line with an ideology that benefits the platform itself. This demands a critical eye, both toward what we consume and what we put out, because anything displayed on social media is going to be mass-consumed. We must be aware of the way we, consciously or subconsciously, manipulate how we are portrayed, so that our self-presentation does not hinder and devalue issues that require selflessness.
We must also foster awareness for the way these platforms we engage with have profit-based agendas of their own. A blind progression into social media activism is extremely harmful. This new medium is greatly influenced by hegemonic structures that surround it and ought to be the target of critique rather than the foundation of dissemination.
This is not a call to block off social media as an outlet for exchange. Instead, this newfound tendency to hijack the pressing issues of our time for personal gain requires us to reevaluate how we get involved and participate in this new form of interaction. It’s a call to think more critically about the way information is exchanged and portrayed and to redirect activism in a direction that remains truer to its cause.
Rebecca Teich is a recent graduate of Bard High School Early College where she received both an Associate of Arts degree from Bard College and a New York High School Regents Diploma. She will be attending Columbia University in the fall where she intends to pursue studies in English and Philosophy.
Image credit: Shutterstock.com
New anthropological research suggests our ancestors enjoyed long slumbers.
- Neanderthal bone fragments discovered in northern Spain mimic hibernating animals like cave bears.
- Thousands of bone fragments, dating back 400,000 years, were discovered in this "pit of bones" 30 years ago.
- The researchers speculate that this physiological function, if true, could prepare us for extended space travel.
Humans have a terrible sense of time. We think in moments, not eons, which helps explain why a number of people still don't believe in evolutionary theory: we simply can't imagine ourselves any differently than we are today.
Thankfully, scientists and researchers have vast imaginations. Their findings often depend on creative problem-solving. Anthropologists are especially adept at this skill, as their job entails imagining a prehistoric world in which humans and our forebears were very different creatures.
A new paper, published in the journal L'Anthropologie, takes a hard look at ancient bone health and arrives at a surprising conclusion: Neanderthals (and possibly early humans) might have endured long, harsh winters by hibernating.
Adaptability is the key to survival. Certain endotherms evolved the ability to depress their metabolism for months at a time; their body temperature and metabolic rate lowered while their breathing and heart rate dropped to nearly imperceptible levels. This handy technique solved a serious resource management problem, as food supplies were notoriously scarce during the frozen months.
While today the wellness industry eschews fat, it has long had an essential evolutionary function: it keeps us alive during times of food scarcity. As autumn months pass, large mammals become hyperphagic (experiencing intense hunger followed by overeating) and store nutrients in fat deposits; smaller animals bury food nearby for when they need a snack. This strategy is critical as hibernating animals can lose over a quarter of their body weight during winter.
For this paper, Antonis Bartsiokas and Juan-Luis Arsuaga, both in the Department of History and Ethnology at Democritus University of Thrace, scoured the remains from a "pit of bones" in northern Spain. In 1976, archaeologists found a 50-foot shaft leading down into a cave in Atapuerca, where thousands of bone fragments have since been discovered. Dating back 400,000 years—some of the fragments may be as old as 600,000 years—researchers believe the bodies were intentionally buried in this cave.
While the fragments have been well studied in the intervening decades, Arsuaga (who led an early excavation in Atapuerca) and Bartsiokas noticed something odd about the bones: they displayed signs of seasonal variations. These proto-humans appear to have experienced annual bone growth disruption, which is indicative of hibernating species.
In fact, the remains of cave bears were also found in this pit, increasing the likelihood that the burial site was reserved for species that shared common features. This could be the result of a dearth of food for bears and Neanderthals alike. The researchers note that modern Arctic peoples don't need to sleep for months at a time because fish and reindeer provide abundant food year-round; no such abundance existed in Spain. They write,
"The aridification of Iberia then could not have provided enough fat-rich food for the people of Sima during the harsh winter—making them resort to cave hibernation."
The notion of hibernating humans is appealing, especially to those in cold climates, but some experts don't want to put the cart before the horse. Large mammals don't engage in textbook hibernation; their deep sleep is known as a "torpor." Even then, the demands of human-sized brains could have been too large for extended periods of slumber.
Still, as we continually discover our animalistic origins to better understand how we evolved, the researchers note the potential value of this research.
"The present work provides an innovative approach to the physiological mechanisms of metabolism in early humans that could help determine the life cycle and physiology of extinct human species."
Bartsiokas speculates that this ancient mechanism could be co-opted for space travel in the future. If the notion of hibernating humans sounds far-fetched, the idea has been contemplated for years: NASA began funding research on this topic in 2014. As the saying goes, everything old is new again.
Stay in touch with Derek on Twitter and Facebook. His new book is "Hero's Dose: The Case For Psychedelics in Ritual and Therapy."
It is impossible for science to arrive at ultimate truths, but functional truths are good enough.
- What is truth? This is a very tricky question, trickier than many would like to admit.
- Science does arrive at what we can call functional truth, that is, when it focuses on what something does as opposed to what something is. We know how gravity operates, but not what gravity is, a notion that has changed over time and will probably change again.
- The conclusion is that there are not absolute final truths, only functional truths that are agreed upon by consensus. The essential difference is that scientific truths are agreed upon by factual evidence, while most other truths are based on belief.
Does science tell the truth? The answer to this question is not as simple as it seems, and my 13.8 colleague Adam Frank took a look at it in his article about the complementarity of knowledge. There are many levels of complexity to what truth is or means to a person or a community. Why?
First, "truth" itself is hard to define or even to identify. How do you know for sure that someone is telling you the truth? Do you always tell the truth? In groups, what may be considered true to a culture with a given set of moral values may not be true in another. Examples are easy to come by: the death penalty, abortion rights, animal rights, environmentalism, the ethics of owning weapons, etc.
At the level of human relations, truth is very convoluted. Living in an age where fake news has taken center stage only corroborates this obvious fact. However, not knowing how to differentiate between what is true and what is not leads to fear, insecurity, and ultimately, to what could be called worldview servitude — the subservient adherence to a worldview proposed by someone in power. The results, as the history of the 20th century has shown extensively, can be catastrophic.
Proclamations of final or absolute truths, even in science, shouldn't be trusted.
The goal of science, at least on paper, is to arrive at the truth without recourse to any belief or moral system. Science aims to go beyond the human mess so as to be value-free. The premise here is that Nature doesn't have a moral dimension, and that the goal of science is to describe Nature the best possible way, to arrive at something we could call the "absolute truth." The approach is a typical heir to the Enlightenment notion that it is possible to take human complications out of the equation and have an absolute objective view of the world. However, this is a tall order.
It is tempting to believe that science is the best pathway to truth because, to a spectacular extent, science does triumph at many levels. You trust driving your car because the laws of mechanics and thermodynamics work. NASA scientists and engineers just managed to have the Ingenuity Mars Helicopter — the first man-made device to achieve powered flight on another planet — hover above the Martian surface all by itself.
We can use the laws of physics to describe the results of countless experiments to amazing levels of accuracy, from the magnetic properties of materials to the position of your car in traffic using GPS locators. In this restricted sense, science does tell the truth. It may not be the absolute truth about Nature, but it's certainly a kind of pragmatic, functional truth at which the scientific community arrives by consensus based on the shared testing of hypotheses and results.
What is truth?
Credit: Sergey Nivens
But at a deeper level of scrutiny, the meaning of truth becomes intangible, and we must agree with the pre-Socratic philosopher Democritus who declared, around 400 BCE, that "truth is in the depths." (Incidentally, Democritus predicted the existence of the atom, something that certainly exists in the depths.)
A look at a dictionary reinforces this view. "Truth: the quality of being true." Now, that's a very circular definition. How do we know what is true? A second definition: "Truth: a fact or belief that is accepted as true." Acceptance is key here. A belief may be accepted to be true, as is the case with religious faith. There is no need for evidence to justify a belief. But note that a fact as well can be accepted as true, even if belief and facts are very different things. This illustrates how the scientific community arrives at a consensus of what is true by acceptance. Sufficient factual evidence supports that a statement is true. (Note that what defines sufficient factual evidence is also accepted by consensus.) At least until we learn more.
Take the example of gravity. We know that an object in free fall will hit the ground, and we can calculate when it does using Galileo's law of free fall (in the absence of friction). This is an example of "functional truth." If you drop one million rocks from the same height, the same law will apply every time, corroborating the factual acceptance of a functional truth, that all objects fall to the ground at the same rate irrespective of their mass (in the absence of friction).
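The mass-independence of free fall can be sketched numerically. Here is a minimal illustration in Python (the function name and the 20-meter drop height are our own choices for the example):

```python
import math

G = 9.81  # gravitational acceleration near Earth's surface, m/s^2

def fall_time(height_m: float) -> float:
    """Time in seconds for an object dropped from rest to fall
    height_m meters, ignoring friction: h = (1/2) g t^2, so
    t = sqrt(2h / g)."""
    return math.sqrt(2 * height_m / G)

# Mass never appears in the formula: a pebble and a boulder dropped
# from 20 m take the same time to land (in the absence of friction).
print(round(fall_time(20.0), 2))  # ~2.02 s
```

Drop the object a million times and the same number comes out every time; that repeatability is exactly what "functional truth" means here.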
But what if we ask, "What is gravity?" That's an ontological question about what gravity is and not what it does. And here things get trickier. To Galileo, it was an acceleration downward; to Newton a force between two or more massive bodies inversely proportional to the square of the distance between them; to Einstein the curvature of spacetime due to the presence of mass and/or energy. Does Einstein have the final word? Probably not.
Is there an ultimate scientific truth?
Final or absolute scientific truths assume that what we know of Nature can be final, that human knowledge can make absolute proclamations. But we know that this can't really work, for the very nature of scientific knowledge is that it is incomplete and contingent on the accuracy and depth with which we measure Nature with our instruments. The more accuracy and depth our measurements gain, the more they are able to expose the cracks in our current theories, as I illustrated last week with the muon magnetic moment experiments.
So, we must agree with Democritus, that truth is indeed in the depths and that proclamations of final or absolute truths, even in science, shouldn't be trusted. Fortunately, for all practical purposes — flying airplanes or spaceships, measuring the properties of a particle, the rates of chemical reactions, the efficacy of vaccines, or the blood flow in your brain — functional truths do well enough.
Using urinals, psychological collages, and animated furniture to shock us into reality.
- Dada is a provocative and surreal art movement born out of the madness of World War I.
- Tzara, a key Dada theorist, says Dada seeks "to confuse and upset, to shake and jolt" people from their comfort zones.
- Dada, as all avant-garde art, faces a key problem in how to stay true to its philosophy.
In a world gone mad, what can the few sane people left do? What can someone say when there are no words that seem up to the job? How can anyone hope to express ideas so terrible when doing so will only reduce those ideas?
These are some of the things that inspired the Dada movement, and in its absurd, surreal, and chaotic nonsense, we find the voice of the voiceless.
The origin of Dadaism
Dada was a response to the madness of World War I. Reasonable, intelligent, and sensitive people looked at the blood and mud graveyards of the trenches and wondered how any meaning or goodness could ever be found again. How can someone make sense of a world where millions of young, happy, hopeful men were scythed down in a spray of bullets? How could life go back to normal when returning soldiers, blinded and disfigured from gas, lay homeless in the streets? Out of this awful revulsion, there came one bitter voice, and it said: "Everything is nonsense."
Dada is the art of the nihilist. It smashes accepted wisdom, challenges norms and values, and offends, upsets, and provokes us to re-examine everything.
And so, the Dada movement expressed itself in absurdity. Tzara, the closest you get to a Dadaist philosopher, put it like this: "Like everything in life, Dada is useless. Dada is without pretension, as life should be." Dada rejects all systems, all philosophy, all definite answers, and all truth. It is the living embrace of contradictions and nonsense. It seeks "to confuse and upset people, to shake and jolt". It aims to shout down the "shamefaced sex of comfortable compromise and good manners," when actually "everything happens in a completely idiotic way."
In short, Dada is a response to the world when all the usual methods have broken down. It's the recognition that dinner party conversations, Hollywood blockbusters, and Silicon Valley are not how life actually is. This is a false reality and order, like some kind of veneer.
The Dada response to life is to embrace the personal and passionate madness of it all, where "the intensity of a personality is transposed directly, clearly into the work." It's to recognize the unique position of an artist, who can convey ideas and feelings in a way that goes beyond normal understanding. Art goes straight to the soul, but the intensity of it all can be hard to "enjoy" in the strictest sense.
Where is this Dada?
For instance, Dada is seen in the poems of Hugo Ball who wrote in meaningless foreign-sounding words. It's in Hausmann, who wrote works in disconnected phonemes. It's found in Duchamp's iconoclastic "Fountain" that sought to question what art or an artist really meant. It's in Hans Richter's short film "Ghost before Breakfast," which has an incoherent montage of images, loosely connected by the theme of inanimate objects in revolt. And, it's in Kurt Schwitters' "psychological collages" which present fragments of objects, juxtaposed together.
Dada is intended to shock. It's an artistic jolt asking, or demanding, that the viewers reorient themselves in some way. It is designed to make us feel uncomfortable and does not make for easy appreciation. It's only when we're thrown so drastically outside of our comfort zone in this way that Dada asks us to question how things are. It shakes us out of a conformist stupor to look afresh at things.
The paradox of Dadaism
Of course, like all avant-garde art, Dada needs to address one major problem: how do you stay so provocative, so radical, and so anti-establishment when you also seek success? How can maverick rebels stay that way once they get a mortgage and want a good school for their kids? The problem is that young, inventive, and idealistic artists are inevitably sucked into the world of profit and commodity.
As Grayson Perry, a British modern artist, wrote: "What starts as a creative revolt soon becomes co-opted as the latest way to make money," and what was once fresh and challenging "falls away to reveal a predatory capitalist robot." With Dada, how long can someone actually live in a world of nonsense and nihilistic absurdity?
But there will always be new blood to keep movements like Dada going. As the revolutionaries of yesterday become the rich mansion-owners of today, there will be hot, young things to come and take up the mantle. There will always be something to challenge and questions to be asked. So, art movements like Dada will always be in the vanguard.
Dada is the art of the nihilist. It smashes accepted wisdom, challenges norms and values, and offends, upsets, and provokes us to re-examine everything. It's an absurd art form that reflects the reality it perceives — that life is nothing more than a dissonant patchwork of egos floating in an abyss of nothing.