Technology usually has more pros than cons, but every benefit still carries some risk.
- A new paper in Nature Human Behaviour states that technology is not making us dumber.
- The authors believe smart technology changes how we engage our biological cognitive abilities.
- While fears are likely overblown, technology addiction and memory problems still need to be addressed.
It seems that every major scientific or technological advancement is immediately labeled "dangerous" by critics. The printing press was going to destroy our memory. Pasteur's groundbreaking work was followed by an anti-vaccine movement. Radio was going to destroy society; then it was television; then, the internet. Pushback against progress appears inevitable.
Of course, technology is usually morally neutral. Smartphones can be used to video call your grandparents or to order illegal drugs. How we use technology is what matters.
Smart tech, dumb people
The newest fear is that smart technology (smartphones, computers, tablets, the internet of things, etc.) is supposedly making us dumber. But a trio of authors, led by Lorenzo Cecutti, argue in a new paper that smart technology is not turning us into dummies.
According to co-author Anthony Chemero, the idea that smartphones and digital technology damage our biological cognitive abilities is not backed up by science. Instead, he claims that we are developing different relationships to cognition due to smart devices. "What smartphones and digital technology seem to do instead is to change the ways in which we engage our biological cognitive abilities."
The team argues that research on using digital technology as an external memory system fails to account for the fact that short-term effects do not necessarily indicate long-term changes to cognitive functioning. They write, "Relying on external tools when they are available is not the same as losing the ability to engage internal processes when necessary."
Other research disagrees with that conclusion. The famous London cab driver study showed that cabbies had larger hippocampi and better memory than non-drivers. Other research shows that GPS reduces spatial awareness and mental mapping. Studies such as these indicate that — as the cliché goes — if you don't use it, you lose it.
Photo: ikostudio / Adobe Stock
More good than bad
Still, the authors are correct that fear about the dangers of technology is overblown. Technology generally makes life better. In particular, the team considered five ways in which smart technologies are especially useful:
- Complexity. Fields such as data visualization, financial accounting, and statistical analysis have all benefited from the speed and accuracy of technology.
- Reliance and skill. Advances in computational ability free up cognitive resources so that coders and data scientists can focus on understanding data and building better programs.
- External access and freed capacity. The internet offers far greater access to information than any previous technology. Because we don't need to memorize, our mental resources are freed for creativity or for learning other things.
- Flexibility. People can freely choose what information to memorize and what to offload.
- Self-insights and self-control. With so much information at our fingertips, we can choose what to focus our attention on. (However, this assumes that smart tech isn't addictive.)
There's always some risk
While the team ably counters the fear-mongering around the "dumbing down" of humanity through technology, they also seem a bit too enthusiastic about championing its advancements. Chemero concludes:
"You put all this technology together with a naked human brain and you get something that's smarter...and the result is that we, supplemented by our technology, are actually capable of accomplishing much more complex tasks than we could with our un-supplemented biological abilities."
That is certainly true, to a degree. But we should also be aware that every benefit comes with a cost.
Here's another thing to consider: what happens if your smartphone or the internet stops working?
Stay in touch with Derek on Twitter. His most recent book is "Hero's Dose: The Case For Psychedelics in Ritual and Therapy."
Counterintuitively, directly combating misinformation online can spread it further. A different approach is needed.
- Like the coronavirus, engaging with misinformation can inadvertently cause it to spread.
- Social media has a business model based on getting users to spend increasing amounts of time on their platforms, which is why they are hesitant to remove engaging content.
- The best way to fight online misinformation is to drown it out with the truth.
A year ago, the Center for Countering Digital Hate warned of the parallel pandemics — the biological contagion of COVID-19 and the social contagion of misinformation, aiding the spread of the disease. Since the outbreak of COVID-19, anti-vaccine accounts have gained 10 million new social media followers, while we have witnessed arson attacks against 5G masts, hospital staff abused for treating COVID patients, and conspiracists addressing crowds of thousands.
Many have refused to follow guidance issued to control the spread of the virus, motivated by beliefs in falsehoods about its origins and effects. The reluctance we see in some to get the COVID vaccine is greater amongst those who rely on social media rather than traditional media for their information. In a pandemic, lies cost lives, and it has felt like a new conspiracy theory has sprung up online every day.
How we, as social media users, behave in response to misinformation can either enable or prevent it from being seen and believed by more people.
The rules are different online
Credit: Pool via Getty Images
If a colleague mentions in the office that Bill Gates planned the pandemic, or a friend at dinner tells the table that the COVID vaccine could make them infertile, the right thing to do is often to challenge their claims. We don't want anyone to be left believing these falsehoods.
But digital is different. The rules of physics online are not the same as they are in the offline world. We need new solutions for the problems we face online.
Now, imagine that in order to reply to your friend, you must first hand him a megaphone so that everyone within a five-block radius can hear what he has to say. It would do more damage than good, but this is essentially what we do when we engage with misinformation online.
Think about misinformation as being like the coronavirus — when we engage with it, we help to spread it to everyone else with whom we come into contact. If a public figure with a large following responds to a post containing misinformation, they ensure the post is seen by hundreds of thousands or even millions of people with one click. Social media algorithms also push content into more users' newsfeeds if it appears to be engaging, so lots of interactions from users with relatively small followings can still have unintended negative consequences.
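Why an angry rebuttal helps a post as much as an approving like can be made concrete with a toy ranking score. Real platform rankers are proprietary, so the function below is purely illustrative; its shape (total engagement divided by a power of post age) follows publicly documented "hot" rankings such as Hacker News's:

```python
def feed_score(likes: int, replies: int, shares: int, hours_old: float) -> float:
    """Toy engagement score: any interaction counts, and newer posts rank higher.

    Illustrative only -- real platform algorithms are proprietary. A furious
    rebuttal adds to `replies` and boosts the score exactly like a like.
    """
    engagement = likes + replies + shares
    return engagement / (hours_old + 2) ** 1.5

# A misinformation post that attracts 50 angry replies outranks the
# same post left alone, so engaging with it widens its reach.
ignored = feed_score(likes=10, replies=0, shares=5, hours_old=3.0)
rebutted = feed_score(likes=10, replies=50, shares=5, hours_old=3.0)
```

Under any score of this shape, the only interaction that doesn't amplify a post is no interaction at all, which is the logic behind the "drown it out" strategy discussed below.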
Additionally, whereas we know our friend from the office or dinner, most of the misinformation we see online will come from strangers. They often will be from one of two groups — true believers, whose minds are made up, and professional propagandists, who profit from building large audiences online and selling them products (including false cures). Both of these groups use trolling tactics, that is, seeking to trigger people to respond in anger, thus helping them reach new audiences and thereby gaming the algorithm.
On the day the COVID vaccine was approved in the UK, anti-vaccine activists were able to provoke pro-vaccine voices into posting about thalidomide, exposing new audiences to a reason to distrust the medical establishment. Those who spread misinformation understand the rules of the game online; it's time those of us on the side of enlightenment values of truth and science did too.
How to fight online misinformation
Of course, it is much easier for social media companies to take on this issue than for us citizens. Research from the Center for Countering Digital Hate and Anti-Vax Watch last month found that 65% of anti-vaccine content on social media is linked to just twelve individuals and their organizations. Were the platforms to simply remove the accounts of these superspreaders, it would do a huge amount to reduce harmful misinformation.
The problem is that social media platforms are reluctant to do so. These businesses have been built by constantly increasing the amount of time users spend on their platforms. Getting rid of the creators of engaging content that keeps millions of people hooked is antithetical to the business model. It will take government intervention to force tech companies to finally protect their users and society as a whole.
So, what can the rest of us do, while we await state regulation?
Instead of engaging, we should be outweighing the bad with the good. Every time you see a piece of harmful misinformation, share advice or information from a trusted source, like the WHO or BBC, on the same subject. The trend of people celebrating and posting photos of themselves or loved ones receiving the vaccine has been far more effective than any attempt to disprove a baseless claim about Bill Gates or 5G mobile technology. In the attention economy that governs tech platforms, drowning out is a better strategy than rebuttal.
Imran Ahmed is CEO of the Center for Countering Digital Hate.
The independent news collective is teaching a new generation of journalists and citizens to spot the stories in plain sight.
On July 17, 2014, Malaysia Airlines Flight 17 (MH17) was shot down over eastern Ukraine. The attack occurred in an area controlled by pro-Russian separatists and appeared to be the result of a surface-to-air missile. Everyone on board was killed.
The event triggered public outrage and an international round of finger-pointing. The West, led by the U.S. and Ukraine, blamed Russia, while Russia tried to pin the blame on Ukraine, going so far as to claim a Ukrainian military jet had tailed the commercial aircraft immediately before the disaster.
Later that year, an investigative team put out a report linking the pro-Russian separatists to the Buk-M1 missile launcher likely responsible for the tragedy. The team compiled photos, satellite imagery, and video evidence to follow a missile transport from Donetsk to Snizhne immediately before the downing of the aircraft. They then confirmed the transport leaving the area later, sans one missile.
This report was not filed by an NGO or a legacy news organization like the New York Times or Wall Street Journal, nor did the team have access to insider or classified information. Its authors were a small, independent collective of researchers and citizen journalists called Bellingcat, and their information came from social media posts, Google Maps satellite imagery, and videos uploaded to YouTube. In other words, the facts were out in the open for anyone to see. Bellingcat simply knew where to look.
Following the data
Dutch Safety Board Chairman Tjibbe Joustra speaks in front of the MH17 wreckage to present its final report into the attack.
Bellingcat was founded by Eliot Higgins, a citizen journalist who gained online prominence investigating weapons smuggling during the Syrian war. The collective's report on the MH17 attack was its breakthrough, and the group has continued to improve our understanding of the tragic event while countering Russian disinformation.
Since then, Bellingcat has legally registered as a foundation in the Netherlands and has continued to unearth consequential details in some of the most important news stories of the last decade, including the Syrian War, the Christchurch mosque massacre, and the poisonings of Yulia Skripal and Alexei Navalny.
The foundation's model of journalism is known as "open-source investigation." According to Aric Toler, Bellingcat's director of research and training, it's less an overturning of investigative journalism than a "genre" within it. This type of investigation follows digital data trails that are freely available on the internet. The bread crumbs could be found in public records, media reports, photos on Twitter, or people silly enough to upload a video of themselves committing a crime on Parler.
"Bellingcat's rise reveals something new about our digitally mediated times: spying is no longer the preserve of nation states – anyone with an internet connection can do it. The balance between open and secret intelligence is shifting. The most useful stuff is often public," writes Luke Harding for the Guardian.
The vast amount of data available online allows Bellingcat's researchers to piece together timelines or connect seemingly disparate events to reveal their connective, underlying thread. In its investigation into the shooting of Ashli Babbitt, researchers created a timeline of radicalization through her social-media footprint; they also mapped her journey during the Capitol Riot by locating videos showing her in the crowd and comparing background details to publicly available floorplans of the U.S. Capitol Building.
Like a fussy math teacher, the foundation employs a "show-your-work approach" to maintain credibility, transparency, and back-of-the-book peeking. Each article or report meticulously presents its data points through links and images, building the trail of evidence crumb-by-crumb. By the end, readers have seen the same evidence as the researcher and can decide whether said evidence supports the researcher's conclusions.
Aware that such evidence can sometimes vanish (deleted by the people who uploaded it or by corporations fretting over public relations), Bellingcat has also gone to great lengths to archive and back up important data before it is lost.
Balancing clarity and caution
While today Bellingcat employs a small team of journalists and editors, it still relies on volunteers and citizen journalists willing to dedicate the time and effort to scrape the internet for leads.
This, Toler told us in our interview, is an advantage of Bellingcat's investigative methods. While traditional news outlets contend with shrinking budgets, fewer personnel, and more information to wrangle than ever, they simply lack the resources necessary to explore the deluge of data we call the internet. Bellingcat, by contrast, overcomes these barriers by tapping into a pre-existing pool of enthusiasts driven by devotion, interest, and personal satisfaction. And the more people who team up to solve a problem, the lighter the work becomes.
But there are challenges. "It's a double-edged sword. On the one hand, you have a clear gap in information. It's just not feasible for large outlets to cover this stuff to the degree it should be. But also, the people who do have time and do it, there's not as much responsibility on them, and who knows what they could do that causes harm," Toler said.
Consider the open-source nature of the evidence. Bellingcat's show-your-work approach is necessary for clarity and transparency, but it also creates a set of instructions for those looking to duplicate the formula. While Bellingcat maintains the guidelines of a traditional newsroom, others may not, and bad actors could locate information Bellingcat deemed sensitive enough to redact and use it to harm others by, say, doxing.
"There's really no good solution because you can't control what the mob does. If someone is angry, they can dig into this stuff because it is open source, and if you give the transparency of how you got your stuff, then you can't avoid the fact that it can then be reproduced and found," Toler said.
Because of this, Bellingcat hopes to serve as a type of intermediary. Like a traditional newsroom, it vets its sources, sets up fail-safes to catch misinformation, and writes its reports to protect bystanders and prevent libel. It hopes these practices will serve as an example for citizen journalists to emulate. On the flip side, it aims to show established news outlets the power and reach of open-source investigative techniques and these online communities.
Recently, Bellingcat has been investigating the Jan. 6 Capitol Riot.
Credit: Spencer Platt/Getty Images
Looking to the larger media landscape, Bellingcat doesn't see itself in competition with traditional news media. It views its position as one of cooperation. The foundation has worked with several news partners to investigate stories and promote its work, such as sharing the findings of its Riley June Williams investigation with NBC.
It also offers training workshops to teach open-source investigation. These are not only attended by journalists wishing to hone their skills but professionals like lawyers and finance managers looking to add these techniques to their trades. Because the foundation sees its methods as an extension of investigative journalism, not a replacement for it, it isn't looking to corner a market. Rather, it aims to evolve a profession to meet the challenges of its new 21st-century environment.
As Toler told us: "Journalism doesn't work one way or the other. It should be both. Do some open-source sleuthing to complement and boost your on-the-ground reporting.
"Our gospel of open source, we're trying to spread that as much as we can. We want to make this a very mainstream part of traditional news. If we're made obsolete, that's a good thing because we'd like for more traditional news outlets to be doing digital investigation and verification work."
Can playing video games really curb the risk of depression? Experts weigh in.
- A new study published by a UCL researcher has demonstrated how different types of screen time can positively (or negatively) influence young people's mental health.
- Young boys who played video games daily had lower depression scores at age 14 compared to those who played less than once per month or never.
- The study also noted that more frequent video game use was consistently associated with fewer depressive symptoms in boys with lower physical activity, but not in those with high physical activity levels.
A new study published by a UCL researcher has demonstrated how different types of screen time can positively (or negatively) influence young people's mental health. The study suggests that boys who play video games frequently in early adolescence (around age 11) are less likely to develop depressive symptoms throughout the following years. Additional findings in this study suggest that girls who spend more time on social media appear to develop more depressive symptoms.
How do video games and social media impact young kids?
The study gained interesting insight into the link between depression rates at age 14 and video game usage a few years earlier.
Credit: Pixel-Shot on Adobe Stock
The study's lead author, Ph.D. student Aaron Kandola, explained to EurekAlert: "Screens allow us to engage in a wide range of activities. Guidelines and recommendations about screen time should be based on our understanding of how these different activities might influence mental health and whether that influence is meaningful."
How this study was conducted:
- These findings come from the Millennium Cohort Study, in which over 11,000 (n = 11,341) adolescents were surveyed.
- Depressive symptoms were measured with a Moods and Feelings Questionnaire (age 14).
- "Exposures" were listed as the frequency of video games, social media, and internet usage (age 11).
- Physical activity was also accounted for on a self-reporting basis.
When comparing young boys (age 11) who played video games to those who didn't, the study showed interesting results:
- Boys who played video games daily had 24.3 percent lower depression scores at age 14 (compared to those who played less than once per month or never).
- Boys who played video games at least once per week had 25.1 percent lower depression scores at age 14 (compared to those who played less than once per month or never).
- Boys who played video games at least once per month had 31.2 percent lower depression scores at age 14 (compared to those who played less than once per month or never).
When comparing how depression impacted young girls based on their social media usage, the researchers found that:
- Compared with using social media less than once per month or never, using social media most days at age 11 was associated with a 13 percent higher depression score at age 14.
Can playing video games actually be beneficial?
There has been a lot of speculation over the past two decades about screen time, social media, and video games, whether it's linking video games to violence and obesity or linking social media to depression and anxiety. According to this research, the answer to the question above is yes: video games can be beneficial in moderation, particularly when paired with physical activity and real-life application.
Adding in some physical activity could be the difference between beneficial and harmful.
The above-mentioned study also noted that more frequent video game use was consistently associated with fewer depressive symptoms in boys with lower physical activity, but not in those with high physical activity levels.
Previous studies have concluded there are some mental health benefits to playing video games.
A 2020 study by the University of Oxford analyzed the impacts of playing two extremely popular games at the time: Nintendo's "Animal Crossing: New Horizons" and Electronic Arts' "Plants vs. Zombies: Battle for Neighborville." The study used data and survey responses from over 3,000 players in total — the games' developers shared anonymous data about people's playing habits, and the researchers surveyed those gamers separately about their well-being.
Results of this study found that time spent playing these games was associated with players reporting that they felt happier.
Additionally, previous studies (such as this University of Arizona study) have linked video game usage with new learning opportunities:
"Games like Minecraft are being used in more and more classrooms around the country. MinecraftEdu (recently purchased by Microsoft) allows teachers to structure a sandbox-style play environment around any curriculum. Students can work together to learn the scientific method, build farms, or take advantage of turtle robots to learn basic programming. Not only do these activities improve team-building skills, but they give students the chance to develop and practice technological literacy."
"Everything in moderation" is an important factor in determining whether video game use is beneficial or harmful.
While there can be some positive impacts from playing video games, research (such as this study conducted in 2013) has also shown that people who spend much of their day gaming are at risk of lower educational and career attainment, as well as problems with peers and weaker social skills.
In the future, you might voluntarily share your social media data with your psychiatrist to inform a more accurate diagnosis.
- About one in five people suffer from a psychiatric disorder, and many go years without treatment, if they receive it at all.
- In a new study, researchers developed machine-learning algorithms that analyzed the relationship between psychiatric disorders and Facebook messages.
- The algorithms were able to correctly predict the diagnosis of psychiatric disorders with statistical accuracy, suggesting digital tools may someday help clinicians identify mental illnesses in early stages.
For the 20 percent of people with a mental illness, early identification of the condition is key to getting the best treatment. But people often suffer symptoms for months, even years, without receiving clinical attention. Part of the problem is that psychiatrists have few tools to identify mental illnesses; they rely mostly on self-reported data and observations from friends and family.
The field is, in some ways, "stuck in the prehistoric age," according to Michael Birnbaum, MD, an assistant professor at the Feinstein Institutes for Medical Research and an attending physician at Zucker Hillside Hospital and Lenox Hill Hospital at Northwell Health.
But digital tools could help bring psychiatry into the modern age.
"It became apparent, in my work with young folks, that social media was ubiquitous," Dr. Birnbaum told Big Think. "So, we started to think about ways that we could potentially explore the utility of the internet and social media in the way we diagnose our patients and the care that we provide."
The results of a recent study, conducted by Feinstein Institutes researchers and IBM Research, suggest that social media activity can provide useful insights into who's at risk of developing mental illnesses like mood disorders and schizophrenia spectrum disorders.
Published in the journal npj Schizophrenia, the study used machine-learning algorithms to analyze millions of Facebook messages and images, which were provided voluntarily by participants, ages 15 to 35. The data represented participants' Facebook activity for 18 months prior to hospitalization.
Identifying psychiatric disorders
The goal was for the algorithms to analyze patterns in these datasets, then predict which group participants belonged to: schizophrenia spectrum disorders (SSD), mood disorders (MD), or healthy volunteers (HV). The results were promising, showing that the algorithms correctly identified:
- The SSD group with an accuracy of 52% (chance was 33%)
- The MD group with an accuracy of 57% (chance was 37%)
- The HV group with an accuracy of 56% (chance was 29%)
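The paper's actual features and models aren't detailed here, but the general setup — predicting a group label (SSD, MD, or HV) from word-use patterns in messages — can be sketched with a minimal nearest-centroid text classifier. The training messages below are invented stand-ins that merely echo the reported signals, not study data:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented examples echoing the article's reported signals
# (perception words for SSD, biological-process words for MD).
TRAINING = {
    "SSD": ["i hear voices and see things", "i feel they watch me"],
    "MD": ["the pain is bad today", "my blood tests hurt so much"],
    "HV": ["great hike with friends today", "loved the new movie"],
}

# One centroid (summed word counts) per group.
CENTROIDS = {label: sum((vectorize(m) for m in msgs), Counter())
             for label, msgs in TRAINING.items()}

def predict(message: str) -> str:
    """Assign the group whose centroid is most similar to the message."""
    vec = vectorize(message)
    return max(CENTROIDS, key=lambda label: cosine(vec, CENTROIDS[label]))
```

A real system would train on millions of messages and richer features (the study also used images and timing), and would report accuracy against the chance level implied by group sizes — which is why chance differs per group in the figures above.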
The study also showed interesting differences in Facebook activity among the groups, such as:
- The SSD group was more likely to use language related to perception (hear, see, feel).
- The MD and SSD groups were far more likely to use swear words and anger-related language.
- The MD group was more likely to use language related to biological processes (blood, pain).
- The SSD group was more likely to express negative emotions, use second-person pronouns and write in netspeak (lol, btw, thx).
- The MD group was more likely to post photos containing more blue and less yellow.
These differences tended to become more apparent in the months before a patient was hospitalized. But even 18 months before hospitalization, the results revealed signals that hinted participants might be on the path to developing a psychiatric disorder. That's where these tools may someday help improve early-identification efforts.
"In psychiatry, we often get a snapshot of somebody's life, for 30 minutes once a month or so," he said. "There's the potential to get much greater granularity with some of these new assessment tools. Facebook, for example, can allow us to understand somebody's thoughts and behaviors in a more real-time, longitudinal fashion, as opposed to cross-sectional moments in time."
Dr. Birnbaum noted that everyone has a unique style of online behavior and that certain behavioral changes may contain clues about mental health.
"The way that we're understanding this is that everybody has a digital baseline, a way they typically act and behave on social media and the internet," he said. "So, ultimately here we would want to identify this baseline for each individual—a fingerprint—and then monitor for changes over time, and identify which changes are concerning, and which are not."
Using digital tools to better identify psychiatric conditions could someday reduce the number of people who suffer without treatment.
"There's an alarming gap between the number of people who experience mental illness and those who receive care," said Michael Dowling, president and CEO of Northwell Health. "It's especially troubling when you consider that the health disparity between people with mental illness and those without is larger than disparities attributable to race, ethnicity, geography or socioeconomic status."
A step toward the future of psychiatry
Credit: Jewel Samad/AFP via Getty Images
Although previous research has examined the relationship between online activity and psychiatric disorders, the new study is unique because it paired online behavior with clinically confirmed cases of psychiatric disorders.
"The vast majority of the data thus far has been extracted from anonymous, or semi-anonymous individuals online, without any real way to validate the diagnosis or confirm the authenticity of the symptoms," Dr. Birnbaum said.
But before clinicians can use these kinds of digital approaches, researchers have more work to do.
"I think that we need much larger datasets," Dr. Birnbaum said. "We need to repeat these findings. We need to better understand how demographic differences, like age, ethnicity and gender, can play a role."
Privacy is another consideration. Dr. Birnbaum emphasized that these kinds of approaches would only be conducted on a voluntary basis, and that the Facebook data used in the recent study was anonymized, and the algorithms examined only individual words, not the context or meaning of sentences.
"This isn't about surveillance, or that Facebook should somehow be monitoring us," Dr. Birnbaum said. "It's about giving the power to the patient. I imagine a world where patients could come into the doctor's office and express their concerns, but also provide some additional clinically meaningful information that they own."
Dr. Birnbaum said the long-term goal isn't for algorithms to make official diagnoses or replace physicians, but rather to serve as supplementary tools. He added that these tools would be used only for people seeking help or information about their risk of developing a psychiatric condition, or suffering a relapse.
"Hopefully one day, we'll be able to incorporate this and other information to inform what we do, the same way you go to a doctor and you get an X-ray or a blood test to inform the diagnosis," he said. "It doesn't make the diagnosis, but it informs the doctor. That is where psychiatry is heading, and hopefully this is a step in that direction."