Spiders lace webs with toxins to paralyze prey
Just what every arachnophobe needed to hear.
- A new study suggests some spiders might lace their webs with neurotoxins similar to the ones in their venom.
- The toxins were shown to be effective at paralyzing insects injected with them.
- Previous studies showed that other spiders lace their webs with chemicals that repel large insects.
Everybody knows how spiders catch bugs to eat. They weave a sticky web and wait for something to land in it. These webs are remarkably tough and elastic, and engineers have spent years trying to replicate their properties. It all seems rather straightforward, as trap setting goes.
But in a twist that will send a chill down the spine of arachnophobes, a new study suggests that some spiders ensure their prey won't get away by adding neurotoxins to their webs.
The study, published in the Journal of Proteome Research, was carried out by biochemical ecologist Mario Palma of São Paulo State University, his Ph.D. student Franciele Esteves, and their colleagues. They focused on the webs of the striking T. clavipes, also known as the banana spider.
These spiders are orb weavers, known for their complex and often large webs. They can have up to seven glands that produce silk for various purposes, including catching prey, shielding themselves, protecting their eggs, mating rituals, and making webbing to walk on.
The researchers examined the spiders' various web-producing glands. This revealed a spectrum of neurotoxin-like proteins on the silk, similar to those found in the spider's venom. On the web, these proteins are suspended in oily fatty acids.
Following up on this discovery, they tested the proteins' effectiveness on insects. Most of those test subjects were paralyzed less than a minute after exposure, and a few died. These experiments relied on injecting the proteins rather than on absorption, but they did demonstrate the proteins' capacity. Further tests showed that the fatty acids the proteins reside in could allow them to enter the body of prey insects.
Previous studies demonstrated that some spiders can add certain chemicals to their webs to repel larger insects that could cause the spider trouble. So the idea that some spiders are adding another chemical to the mix, this time to cause paralysis, isn't too far-fetched.
However, some scientists aren't so sure. They call for further study into the mechanism of action to demonstrate that these proteins cause paralysis and to rule out other potential functions.
So, those of you who like animal facts can take pride in knowing that spider webs sometimes have poison in them to stun their prey. Those of you who are terrified of spiders can fear the same information. Either way, walking into a spider web just got even less pleasant.
A cave in France contains humanity's earliest-known structures, which could only have been built by Neanderthals, long believed incapable of such things.
In a French cave deep underground, scientists have discovered what appear to be 176,000-year-old man-made structures. That's 150,000 years earlier than any that have been discovered anywhere before. And they could only have been built by Neanderthals, people who were never before considered capable of such a thing.
This is going to force a major shift in the way we see these early hominids. Researchers had thought that Neanderthals were profoundly primitive, and just barely human. This cave in France's Aveyron Valley changes all that: It's suddenly obvious that Neanderthals were not quite so unlike us.
According to The Atlantic, Bruniquel Cave was first explored in 1990 by Bruno Kowalsczewski, who was 15 at the time. He'd spent three years digging away at rubble covering a space through which his father felt air moving.
Some members of a local caving club managed to squeeze through the narrow, 30-meter-long tunnel Kowalsczewski had dug to arrive in a passageway. They followed it past pools of water and old animal bones for over 330 meters before coming into a large chamber and a scene they had no reason to expect: stalagmites that someone had broken into hundreds of small pieces, most of which were arranged into two rings—one roughly 6 meters across, and one 2 meters wide—with the remaining pieces stacked into one of four piles or leaning against the rings. There were also indications of fires and burnt bones.
A professional archeologist, Francois Rouzaud, determined with carbon dating that a burnt bear bone found in the chamber was 47,600 years old, which made the stalagmite structures older than any known cave painting. It also put the cave squarely within the age of the Neanderthals since they were the only humans in France that early. No one had suspected them of being capable of constructing complex forms or doing anything that far underground.
After Rouzaud suddenly died in 1999, exploration at the cave stopped until lifelong caver Sophie Verheyden, vacationing in the area, heard about it and decided to try to uranium-date the stalagmites inside.
The team she assembled eventually determined that the stalagmites had been broken up by people 176,000 years ago, far earlier than even Rouzaud had supposed.
There weren't any signs that Neanderthals lived in the cave, so it's a mystery what they were up to down there. Verheyden thinks it's unlikely that a solitary artist created the tableaux, and so an organized group of skilled workers must've been involved. And “When you see such a structure so far into the cave, you think of something cultural or religious, but that's not proven," Verheyden told The Atlantic.
Whatever they built, the Bruniquel Cave reveals some big surprises about Neanderthals: they had fire, they built things, and they likely used tools. Add this to recent discoveries that suggest they buried their dead, made art, and maybe even had language, and these mysterious proto-humans start looking a lot more familiar. A lot more like Homo sapiens, and a lot more like distant cousins lost to history.
Counterintuitively, directly combating misinformation online can spread it further. A different approach is needed.
- Like the coronavirus, engaging with misinformation can inadvertently cause it to spread.
- Social media has a business model based on getting users to spend increasing amounts of time on their platforms, which is why they are hesitant to remove engaging content.
- The best way to fight online misinformation is to drown it out with the truth.
A year ago, the Center for Countering Digital Hate warned of the parallel pandemics — the biological contagion of COVID-19 and the social contagion of misinformation, aiding the spread of the disease. Since the outbreak of COVID-19, anti-vaccine accounts have gained 10 million new social media followers, while we have witnessed arson attacks against 5G masts, hospital staff abused for treating COVID patients, and conspiracists addressing crowds of thousands.
Many have refused to follow guidance issued to control the spread of the virus, motivated by beliefs in falsehoods about its origins and effects. The reluctance we see in some to get the COVID vaccine is greater amongst those who rely on social media rather than traditional media for their information. In a pandemic, lies cost lives, and it has felt like a new conspiracy theory has sprung up online every day.
How we, as social media users, behave in response to misinformation can either enable or prevent it from being seen and believed by more people.
The rules are different online
If a colleague mentions in the office that Bill Gates planned the pandemic, or a friend at dinner tells the table that the COVID vaccine could make them infertile, the right thing to do is often to challenge their claims. We don't want anyone to be left believing these falsehoods.
But digital is different. The rules of physics online are not the same as they are in the offline world. We need new solutions for the problems we face online.
Now, imagine that in order to reply to your friend, you must first hand him a megaphone so that everyone within a five-block radius can hear what he has to say. It would do more damage than good, but this is essentially what we do when we engage with misinformation online.
Think about misinformation as being like the coronavirus — when we engage with it, we help to spread it to everyone else with whom we come into contact. If a public figure with a large following responds to a post containing misinformation, they ensure the post is seen by hundreds of thousands or even millions of people with one click. Social media algorithms also push content into more users' newsfeeds if it appears to be engaging, so lots of interactions from users with relatively small followings can still have unintended negative consequences.
Additionally, whereas we know our friend from the office or dinner, most of the misinformation we see online comes from strangers. They will often belong to one of two groups: true believers, whose minds are made up, and professional propagandists, who profit from building large audiences online and selling them products (including false cures). Both groups use trolling tactics — that is, they seek to trigger people into responding in anger, which helps them reach new audiences and game the algorithm.
On the day the COVID vaccine was approved in the UK, anti-vaccine activists were able to provoke pro-vaccine voices into posting about thalidomide, exposing new audiences to a reason to distrust the medical establishment. Those who spread misinformation understand the rules of the game online; it's time those of us on the side of enlightenment values of truth and science did too.
How to fight online misinformation
Of course, it is much easier for social media companies to take on this issue than for us citizens. Research from the Center for Countering Digital Hate and Anti-Vax Watch last month found that 65% of anti-vaccine content on social media is linked to just twelve individuals and their organizations. Were the platforms to simply remove the accounts of these superspreaders, it would do a huge amount to reduce harmful misinformation.
The problem is that social media platforms are reluctant to do so. These businesses have been built by constantly increasing the amount of time users spend on their platforms. Getting rid of the creators of engaging content that has millions of people hooked is antithetical to the business model. It will require intervention from governments to force tech companies to finally protect their users and society as a whole.
So, what can the rest of us do, while we await state regulation?
Instead of engaging, we should be outweighing the bad with the good. Every time you see a piece of harmful misinformation, share advice or information from a trusted source, like the WHO or BBC, on the same subject. The trend of people celebrating and posting photos of themselves or loved ones receiving the vaccine has been far more effective than any attempt to disprove a baseless claim about Bill Gates or 5G mobile technology. In the attention economy that governs tech platforms, drowning out is a better strategy than rebuttal.
Imran Ahmed is CEO of the Center for Countering Digital Hate.
Because of our ability to think about thinking, "the gap between ape and man is immeasurably greater than the one between amoeba and ape."
- Self-awareness — namely, our capacity to think about our thoughts — is central to how we perceive the world.
- Without self-awareness, education, literature, and other human endeavors would not be possible.
- Striving toward greater self-awareness is the spiritual goal of many religions and philosophies.
The following is an excerpt from Dr. Stephen Fleming's forthcoming book Know Thyself. It is reprinted with permission from the author.
I now run a neuroscience lab dedicated to the study of self-awareness at University College London. My team is one of several working within the Wellcome Centre for Human Neuroimaging, located in an elegant town house in Queen Square in London. The basement of our building houses large machines for brain imaging, and each group in the Centre uses this technology to study how different aspects of the mind and brain work: how we see, hear, remember, speak, make decisions, and so on. The students and postdocs in my lab focus on the brain's capacity for self-awareness. I find it a remarkable fact that something unique about our biology has allowed the human brain to turn its thoughts on itself.
Until quite recently, however, this all seemed like nonsense. As the nineteenth-century French philosopher Auguste Comte put it: "The thinking individual cannot cut himself in two — one of the parts reasoning, while the other is looking on. Since in this case the organ observed and the observing organ are identical, how could any observation be made?" In other words, how can the same brain turn its thoughts upon itself?
Comte's argument chimed with scientific thinking at the time. After the Enlightenment dawned on Europe, an increasingly popular view was that self-awareness was special and not something that could be studied using the tools of science. Western philosophers were instead using self-reflection as a philosophical tool, much as mathematicians use algebra in the pursuit of new mathematical truths. René Descartes relied on self-reflection in this way to reach his famous conclusion, "I think, therefore I am," noting along the way that "I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind." Descartes proposed that a central soul was the seat of thought and reason, commanding our bodies to act on our behalf. The soul could not be split in two — it just was. Self-awareness was therefore mysterious and indefinable, and off-limits to science.
We now know that the premise of Comte's worry is false. The human brain is not a single, indivisible organ. Instead, the brain is made up of billions of small components — neurons — that each crackle with electrical activity and participate in a wiring diagram of mind-boggling complexity. Out of the interactions among these cells, our entire mental life — our thoughts and feelings, hopes and dreams — flickers in and out of existence. But rather than being a meaningless tangle of connections with no discernible structure, this wiring diagram also has a broader architecture that divides the brain into distinct regions, each engaged in specialized computations. Just as a map of a city need not include individual houses to be useful, we can obtain a rough overview of how different areas of the human brain are working together at the scale of regions rather than individual brain cells. Some areas of the cortex are closer to the inputs (such as the eyes) and others are further up the processing chain. For instance, some regions are primarily involved in seeing (the visual cortex, at the back of the brain), others in processing sounds (the auditory cortex), while others are involved in storing and retrieving memories (such as the hippocampus).
In a reply to Comte in 1865, the British philosopher John Stuart Mill anticipated the idea that self-awareness might also depend on the interaction of processes operating within a single brain and was thus a legitimate target of scientific study. Now, thanks to the advent of powerful brain imaging technologies such as functional magnetic resonance imaging (fMRI), we know that when we self-reflect, particular brain networks indeed crackle into life and that damage or disease to these same networks can lead to devastating impairments of self-awareness.
I often think that if we were not so thoroughly familiar with our own capacity for self-awareness, we would be gobsmacked that the brain is able to pull off this marvelous conjuring trick. Imagine for a moment that you are a scientist on a mission to study new life-forms found on a distant planet. Biologists back on Earth are clamoring to know what they're made of and what makes them tick. But no one suggests just asking them! And yet a Martian landing on Earth, after learning a bit of English or Spanish or French, could do just that. The Martians might be stunned to find that we can already tell them something about what it is like to remember, dream, laugh, cry, or feel elated or regretful — all by virtue of being self-aware.
But self-awareness did not just evolve to allow us to tell each other (and potential Martian visitors) about our thoughts and feelings. Instead, being self-aware is central to how we experience the world. We not only perceive our surroundings; we can also reflect on the beauty of a sunset, wonder whether our vision is blurred, and ask whether our senses are being fooled by illusions or magic tricks. We not only make decisions about whether to take a new job or whom to marry; we can also reflect on whether we made a good or bad choice. We not only recall childhood memories; we can also question whether these memories might be mistaken.
Self-awareness also enables us to understand that other people have minds like ours. Being self-aware allows me to ask, "How does this seem to me?" and, equally importantly, "How will this seem to someone else?" Literary novels would become meaningless if we lost the ability to think about the minds of others and compare their experiences to our own. Without self-awareness, there would be no organized education. We would not know who needs to learn or whether we have the capacity to teach them. The writer Vladimir Nabokov elegantly captured this idea that self-awareness is a catalyst for human flourishing:
"Being aware of being aware of being. In other words, if I not only know that I am but also know that I know it, then I belong to the human species. All the rest follows — the glory of thought, poetry, a vision of the universe. In that respect, the gap between ape and man is immeasurably greater than the one between amoeba and ape."
In light of these myriad benefits, it's not surprising that cultivating accurate self-awareness has long been considered a wise and noble goal. In Plato's dialogue Charmides, Socrates has just returned from fighting in the Peloponnesian War. On his way home, he asks a local boy, Charmides, if he has worked out the meaning of sophrosyne — the Greek word for temperance or moderation, and the essence of a life well lived. After a long debate, the boy's cousin Critias suggests that the key to sophrosyne is simple: self-awareness. Socrates sums up his argument: "Then the wise or temperate man, and he only, will know himself, and be able to examine what he knows or does not know… No other person will be able to do this."
Likewise, the ancient Greeks were urged to "know thyself" by a prominent inscription carved into the stone of the Temple of Apollo at Delphi. For them, self-awareness was a work in progress and something to be striven toward. This view persisted into medieval religious traditions: for instance, the Italian priest and philosopher Saint Thomas Aquinas suggested that while God knows Himself by default, we need to put in time and effort to know our own minds. Aquinas and his monks spent long hours engaged in silent contemplation. They believed that only by participating in concerted self-reflection could they ascend toward the image of God.
A similar notion of striving toward self-awareness is seen in Eastern traditions such as Buddhism. The spiritual goal of enlightenment is to dissolve the ego, allowing more transparent and direct knowledge of our minds acting in the here and now. The founder of Chinese Taoism, Lao Tzu, captured this idea that gaining self-awareness is one of the highest pursuits when he wrote, "To know that one does not know is best; Not to know but to believe that one knows is a disease."
Today, there is a plethora of websites, blogs, and self-help books that encourage us to "find ourselves" and become more self-aware. The sentiment is well meant. But while we are often urged to have better self-awareness, little attention is paid to how self-awareness actually works. I find this odd. It would be strange to encourage people to fix their cars without knowing how the engine worked, or to go to the gym without knowing which muscles to exercise. This book aims to fill this gap. I don't pretend to give pithy advice or quotes to put on a poster. Instead, I aim to provide a guide to the building blocks of self-awareness, drawing on the latest research from psychology, computer science, and neuroscience. By understanding how self-awareness works, I aim to put us in a position to answer the Athenian call to use it better.