What people smuggle onto airplanes — and why
Most of those who try to sneak stuff onboard succeed.
- 32.4 percent of American travelers try to sneak forbidden items onboard.
- 87.7 percent of them succeed.
- It's mostly about recreational drugs, but also about explosives, poisons, and infectious items.
If you travel by air these days, odds are your opinion of the Transportation Security Administration (TSA) agents you meet on security lines isn't exactly neutral. They're there to make us feel more secure on airplanes, and maybe even be more secure. But while they comfort some of us, they aggravate others, removing our shoes and exposing us to invasive backscatter X-rays while exhibiting all the warmth and compassion a $19.31-per-hour job elicits. (That's the average; TSA agents start at $16.) Of course, in spring 2018 alone, the TSA screened over 72 million passengers, so that's a lot of trays, bags, shoes, and semi-nude body pix to go through. There's also some skepticism that all of this actually makes us much more secure.
Stratos Jet Charters recently conducted a survey of people who've tried to sneak illicit materials past the TSA, after first ascertaining that about 32.4% of us have. The company surveyed 1,001 people about what they attempted to smuggle onboard and why. By the way, 87.7% of them were successful, which is not a ringing endorsement for our friends at the TSA. Stratos Jet Charters compiled the results of its survey as a series of often-disturbing visualizations called Sky-High Smuggling.
All infographics in this article are by Stratos Jet Charters
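Taken together, those two survey figures imply a combined rate: roughly 28% of travelers both attempt to smuggle something and get away with it. Here's a quick back-of-envelope sketch of that arithmetic, assuming (a big assumption for a self-reported survey) that the sample is representative of travelers overall:

```python
# Back-of-envelope: combine the two rates from the Stratos Jet Charters survey.
# Assumes the self-reported sample is representative of all travelers.
attempt_rate = 0.324   # share of travelers who try to sneak a forbidden item
success_rate = 0.877   # share of attempters who get it past the TSA

combined = attempt_rate * success_rate
print(f"{combined:.1%} of travelers smuggle something successfully")
# prints: 28.4% of travelers smuggle something successfully

# Scaled against the ~72 million screenings the TSA performed in spring 2018
# (screenings, not unique travelers, so this overcounts repeat flyers):
screenings = 72_000_000
print(f"on the order of {combined * screenings:,.0f} successful smuggles")
```

The point of the multiplication is just that the two percentages compound: a high success rate applied to a sizable attempt rate still leaves more than a quarter of the line around you carrying something they shouldn't be.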
What’s being smuggled, and why? Mostly drugs, because.

Far and away, drugs make up the lion's share of the stuff people surreptitiously get onto airplanes—counting grass, its cousins, and illegal prescription drugs, we're talking 48% of what women secretly travel with, and 55.8% of what men smuggle aboard. For women, it's about half and half, but for men, it's very much mostly marijuana.
Next up are weapons and ammo, at a much lower 8.5% for females and 15.2% for males. After that comes 140-proof-plus alcohol, really strong stuff, and hopefully not brought aloft by the same people who bring guns.
Even more disturbing is that people are bringing "poisonous or infectious materials" into a packed aircraft. While a popular trope for TV shows—see season 1 of Fringe—this sounds truly scary for real life. Checking the TSA's prohibited items list, though, reveals the kind of things one might find in this category. It's mostly pretty obvious and not 12 Monkeys stuff.
Regardless of the item, the taboo carry-ons are mostly sneaked in because these passengers didn't want to do without them during their travel. Some consider such an item a memento of their trip, either because it's not legal at home or they see it as a souvenir. And it seems 6.4% are drug mules or gun runners...just saying.
How we’re sneaking things onboard
There are a range of ways people get their contraband into the cabin. Women are more likely to pack it into their checked bags, while men—brazen souls—are more likely to secretly stuff it in carry-ons. Apparently the TSA has reason to check our shoes, too. (We don't think we want to know what "OTHER" means.)
Getting high up high
So, the big ticket item is marijuana, and primarily for personal use, by a long shot. A lot of this traffic has to do with inconsistencies in grass' legal status from place to place.
Most of the weed making it into the skies is in edible form, at 88.5%, especially among those who haven't gotten busted. Baggie toters also generally get away with it; they represent 68.8% of the successful travelers.
The people who most often got stopped were—duh—those carrying blunts. We assume this includes critters sufficiently crispy to float up to the gate with visible spliffs.
Getting even higher in the skies
Some are inclined to bring other illegal, often harder, drugs on their trips. Heroin, cocaine, and opium top the list. Next up is LSD, followed by mind-alterers such as khat, MDMA, peyote, and mushrooms.
The unprescribed prescription drugs most often smuggled are benzodiazepines, presumably to smooth out travel jitters, and sleep aids, to knock a passenger out altogether.
Some work, mostly play
Most of this DIY smuggling is taking place during personal travel—it's risky enough without jeopardizing one's job. Of course, if it's related to your work...
The only category of items that breaks 20% on business trips is unauthorized weapons and ammunition, so, um. The people in the survey traveled far more for personal reasons (69%) than for business (31%), anyway.
As you hit your next security line or board your next aircraft, take some solace in knowing that most of us don't deliberately smuggle contraband onboard. The 32.4% who do is a sizable number, but that still means 67.6% of those in line with you don't.
Study: Unattractive people far overestimate their looks
The finding is remarkably similar to the Dunning-Kruger effect, which describes how incompetent people tend to overestimate their own competency.
- Recent studies asked participants to rate the attractiveness of themselves and other participants, who were strangers.
- The studies kept yielding the same finding: unattractive people overestimate their attractiveness, while attractive people underrate their looks.
- Why this happens is unclear, but it doesn't seem to be due to a general inability to judge attractiveness.
There's no shortage of disparities between attractive and unattractive people. Studies show that the best-looking among us tend to have an easier time making money, receiving help, avoiding punishment, and being perceived as competent. (Sure, research also suggests beautiful people have shorter relationships, but they also have more sexual partners, and more options for romantic relationships. So call it a wash.)
Now, new research reveals another disparity: Unattractive people seem less able to accurately judge their own attractiveness, and they tend to overestimate their looks. In contrast, beautiful people tend to rate themselves more accurately. If anything, they underestimate their attractiveness.
The research, published in the Scandinavian Journal of Psychology, involved six studies that asked participants to rate the attractiveness of themselves and other participants, who were strangers. The studies also asked participants to predict how others might rate them.
In the first study, lead author Tobias Greitemeyer found that the participants who were most likely to overestimate their attractiveness were among the least attractive people in the study, based on average ratings.

Ratings of subjective attractiveness as a function of the participant's objective attractiveness (Study 1)
Greitemeyer
"Overall, unattractive participants judged themselves to be of about average attractiveness and they showed very little awareness that strangers do not share this view. In contrast, attractive participants had more insights into how attractive they actually are. [...] It thus appears that unattractive people maintain illusory self‐perceptions of their attractiveness, whereas attractive people's self‐views are more grounded in reality."
Why do unattractive people overestimate their attractiveness? Could it be because they want to maintain a positive self-image, so they delude themselves? After all, previous research has shown that people tend to discredit or "forget" negative social feedback, which seems to help protect a sense of self-worth.

To find out, Greitemeyer conducted a study that aimed to put participants in a positive, non-defensive mindset before rating attractiveness. He did that by asking participants questions that affirmed parts of their personality that had nothing to do with physical appearance, such as: "Have you ever been generous and selfless to another person?" Yet, this didn't change how participants rated themselves, suggesting that unattractive people aren't overestimating their looks out of defensiveness.
The studies kept yielding the same finding: unattractive people overestimate their attractiveness. Does that bias sound familiar? If so, you might be thinking of the Dunning-Kruger effect, which describes how incompetent people tend to overestimate their own competency. Why? Because they lack the metacognitive skills needed to discern their own shortcomings.
Greitemeyer found that unattractive people were worse at differentiating between attractive and unattractive people. But the finding that unattractive people may have different beauty ideals (or, more plainly, weaker ability to judge attractiveness) did "not have an impact on how they perceive themselves."
In short, it remains a mystery exactly why unattractive people overestimate their looks. Greitemeyer concluded that, while most people are decent at judging the attractiveness of others, "it appears that those who are unattractive do not know that they are unattractive."
Unattractive people aren't completely unaware
The results of one study suggested that unattractive people aren't completely in the dark about their looks. In the study, unattractive people were shown a set of photos of highly attractive and unattractive people, and they were asked to select photos of people with comparable attractiveness. Most unattractive people chose to compare themselves with similarly unattractive people.
"The finding that unattractive participants selected unattractive stimulus persons with whom they would compare their attractiveness to suggests that they may have an inkling that they are less attractive than they want it to be," Greitemeyer wrote.
Helmet worn at home shrank man's brain tumor by a third
The new brain tumor treatment targets a cancer that kills 75% of patients within a year.
This article was originally published on our sister site, Freethink.
A new brain tumor treatment appeared to shrink a man's aggressive glioblastoma tumor by nearly a third — and all he had to do was wear a noninvasive helmet at home.
The challenge: Glioblastoma is a rare but aggressive type of brain cancer that is almost always fatal in adults — 75% of patients die within a year of diagnosis, and only 5% live more than five years.
Treatment usually starts with risky surgery to remove the bulk of the brain tumor, after which a patient might undergo chemo or radiation therapy.
Not only can the side effects of those treatments hurt a patient's quality of life, but the treatments themselves can't actually cure the brain cancer — they just buy the patient a little more time.
Why it matters: Survival rates for glioblastoma have remained mostly stagnant over the past few decades, meaning our ability to treat the deadly brain cancer isn't getting much better.
If that doesn't change, we'll continue to lose about 200,000 people to the disease every year, worldwide.
New brain tumor treatment: In a past study, researchers at Houston Methodist Neurological Institute found they could kill glioblastoma cells in the lab by subjecting them to oscillating magnetic fields, which they created by using electricity to rotate magnets in a precise way.
They believe the fields disrupt the transport of electrons during the process cells use to produce energy. However, compounds produced by tumor cells are needed to trigger this disruption, meaning healthy cells should be spared while glioblastoma cells die.
The case study: In 2019, the researchers received approval under the FDA's compassionate use protocol to test the therapy on a man whose brain tumor wasn't responding to aggressive cancer treatments.
Over the course of three days, they trained the man and his wife to deliver the therapy using a helmet equipped with three rotating magnets.
They then sent him home with the helmet and instructions to administer the brain tumor treatment for two hours every day at first and then work his way up to six hours.
The results: The man used the helmet for 36 days before suffering an unrelated head injury that led to his death. His family gave the researchers permission to autopsy his brain, and they found that his tumor had shrunk by 31% since the start of the study.
"Thanks to the courage of this patient and his family, we were able to test and verify the potential effectiveness of the first noninvasive therapy for glioblastoma in the world," corresponding author David S. Baskin said in a press release.
Looking ahead: While this study is encouraging, the researchers will need to prove their brain tumor treatment can help more than a single patient.
The unlucky head injury also means we don't know if shrinking the tumor in the short-run improves survival rates. But if it can, the helmet could mark a turning point in the battle against glioblastoma.
"Imagine treating brain cancer without radiation therapy or chemotherapy," Baskin said. "Our results in the laboratory and with this patient open a new world of non-invasive and nontoxic therapy for brain cancer, with many exciting possibilities for the future."
Robots may be more like animals than humans
Meet MIT's Kate Darling, a robot ethicist who says that we should rethink our relationship with robots.
This article was originally published on our sister site, Freethink.
We're nearly a quarter into the 21st century, and by now, the Terminator-style portrayal of robots taking over the world has become a tired cliche. While it's seductive, most of us are aware that this isn't (likely) the future of intelligent life. But what will that look like, then?
According to Kate Darling, a robot ethicist at MIT and author of "The New Breed: What Our History With Animals Reveals About Our Future with Robots," the answer is right in front of us: animals.
While we have traditionally viewed robots as human-like, Darling believes the more apt comparison is seeing them as a different kind of "animal."
Robots will increasingly occupy shared spaces with humans, social robots will take off, and the questions around how humans should treat and interact with robots have never been more critical, Darling argues.

Her point isn't that robots and animals are the same or that they should be used exactly the same way, but that we should be open to the different ways we can collaborate with robots, harnessing their diverse range of skills and abilities — as we do with animals.
I spoke to Darling about how a robot's design affects our interaction with it, why we should stop worrying about robots replacing humans, and more. Here is our conversation, edited and condensed for clarity.
Why have robots traditionally been designed to look like humans? What is the thinking behind that?
We've always been fascinated with recreating ourselves. We had automata back in ancient times that were recreations of human bodies that could move around. Even the earliest artificial intelligence researchers started out with a goal of recreating human intelligence.
With robots and AI, in particular, they are machines that can sense and think and make autonomous decisions and learn. So we tend to automatically compare them to ourselves as well because of our inherent tendency to compare everything to ourselves. And traditionally, a lot of robots have been human-shaped — even though that's not necessarily the most practical form.
What are the problems with this human-like design?
So there's this subconscious comparison of robots to humans that has been enhanced by the design. I think it doesn't make sense. First of all, AI is not like human intelligence — robots don't have the same skills as people. So oftentimes when we expect a robot to behave like a human, it's a very disappointing experience. That's not to say that robots and AI aren't smart, just that they have a very different type of intelligence and skill than people do.
Also, this comparison really limits us. The early AI researchers were trying to recreate a human brain and human intelligence, but that's not where we've ended up. And so our question shouldn't be, "at what point can we recreate human ability and human skill in a robot?" The question is, "why would we want to do that in the first place when we can create something different?" Robots and AI don't think or behave like us, but they are very useful and very smart.
Instead, you suggest using animals as a way to think about robots. What are the parallels here?
There are so many fun parallels. For thousands of years, we've used animals as a supplement to human ability. Not because they do what we do, but because their skill sets are so different from ours.
We used oxen to plow our fields, we've used horses to let us travel around in new ways. In some ways, a horse-drawn carriage is the original semi-autonomous vehicle. We've used pigeons to carry mail or deliver medicine in ways similar to how we're using drones today. We used them to take aerial photographs. So they were the original hobby photography drone. We've used dolphins in the Navy to detect mines underwater or locate lost underwater equipment, which is a similar function to how we're starting to use underwater robots today.
But animals have feelings, and robots don't. How does this affect the way we do, or should, treat robots?
Right. So this is something that has always really fascinated me about human-robot interaction. What it actually says about how we treat other entities. Because in many cases we have not treated animals very well in partnering with them. And in fact, in Western society, we're often quite hypocritical about how we think about how we want to treat other beings and how we actually treat them.
So a lot of us think that we care about whether other beings feel or whether they have intelligence or whether they can suffer, but if you look at the history of animal rights in Western society, it quickly becomes apparent that we have only protected the animals that are cute or that we care about culturally, or that we have some emotional relationship to.
What's so interesting about human-robot interaction research is it's showing that we treat robots in very similar ways, where we treat some of them that we have no emotional connection to as tools and products, and then others we treat as companions or develop emotional attachments to.
So it's entirely possible that if we don't stop and think about this, we may default to caring more about a robot that feels nothing than about a slimy slug in our backyard. It's actually a unique moment in time where we could stop and think and maybe nudge our behavior in a way that's more consistent with what we feel our values are.
It's interesting you say that because I was thinking the opposite — that we might treat animals kindly, but we sometimes treat robots (especially social ones) with detachment. And there can be harmful effects of this. For instance, if we talk "down" to an Amazon Alexa, it has implications for the way we might treat women in our lives.
So I do think there actually is possibly an argument for treating technology with some kindness, as ridiculous as that sounds. Even though the technology can't feel and we're not anywhere close to having sentient robots or robot consciousness.
But there are questions around our own behavior. So if you get used to barking commands at Alexa, or your kid gets used to barking commands at Alexa, you could get used to barking commands at women, or women named Alexa, or other people. Parents have raised enough concern about this that a lot of these home voice assistant companies have released a "magic word" feature that makes Alexa respond only if you say "please" and "thank you," for example.
But then you get into all sorts of questions with different designs of robots. Increasingly we're seeing certain robots designed in a very lifelike way, including robots that can respond to being kicked, for example, with a simulation of pain. And one question is, even though the robot can't feel, should we let people kick them?
And what if we had a real-life Westworld theme park, where people could go and do anything they want to life-like robots? Is that a healthy outlet for violent behavior or (does it) train people's cruelty muscles? I don't have an answer to the question, but it is a question that is going to be raised very soon.
Right. So on the flip side, maybe we could go in the other direction and make them seem not lifelike at all, more like just a neutral object that we don't associate with any kind of life?
We can try. What we're also seeing in the research is that it's really hard to turn off this tendency that we have to treat robots like living things. Even something as simple as the Roomba vacuum cleaner — just because it's moving around on its own, people will name the Roomba. People will feel bad for the Roomba when it gets stuck. So it's a very difficult human tendency to counteract.
And in fact, a lot of animal researchers and nature researchers have moved away from the idea that we have to get rid of how we project ourselves onto animals and have said, "Okay, this is something that is there, we just need to be very aware of it, and we can nudge our behavior in certain directions, but we're not going to get rid of the tendency entirely."
And maybe that's a good thing because it means that we can relate to animals in certain ways that might be actually beneficial for humans. So having therapy dogs or having pets as companionship can actually be a very positive thing for people.
Or military robots, where soldiers are becoming emotionally attached to the bomb disposal units that they work with. Which, at first blush, you would say, "Okay, that's terrible. We don't want soldiers to be risking their lives or behaving in an inefficient way on a battlefield because they've developed an emotional connection to a robot."
But at the same time, if you look at the history of the role that animals have played in war, yes, soldiers sometimes made bad decisions based on wanting to save their dog or their horse on the battlefield. But the animals brought so much emotional comfort to soldiers in very stressful situations that it's not clear to me that it's necessarily a bad thing, even if we could prevent it.
Many people fear that robots are going to threaten us in some way, or replace us. How does shifting it to a view of an animal change the way that we look at that issue?
Particularly in Western society, we have this idea that there's this constant threat of robots rising up against us or coming to replace us. And in part that comes from this comparison of robots to humans — and it's very limiting. It influences a lot of our conversations from what intelligence is, to our conversations about robots and jobs and robots replacing people one-to-one.
Using the animal analogy helps us step away from this fear of being replaced, and animals obviously have not replaced us. Animals have disrupted society. They have created completely different workplaces for people. They have revolutionized farming and transportation and all sorts of things that technology is also going to disrupt, but we've never had the same type of fear about animals rising up against us.
That fear is also quite misplaced with robots, given that we're not anywhere close to having artificial superintelligence or any of the science-fiction scenarios that get a lot of attention in the press. It's actually the wrong question to be worried about.
What's been the driving force behind your research? The question you are most interested in?
The thing that blows my mind is our tendency to treat robots like they're alive, even if we know perfectly well that they're just machines. Just a few weeks ago I got this baby harp seal robot called a PARO. It's a medical device that's used with dementia patients in a nursing home. And it looks like a very cute baby seal. It doesn't do very much, it just kind of responds to touch. Makes these little movements and little sounds. I was showing it off to the group of roboticists that I work with. They create social robots — they specifically design robots that give off cues like this.
They were all like, "Oh, it's so cute. Oh, look, it's doing XYZ!" So even the people who build the programs are not immune; in fact, they're still very susceptible to being swayed by these artificial cues that we've programmed into these machines. It seems to be such a deep biological tendency that we have. It always surprises me, even though I've seen it happen and there's so much research on it.
I think we're not talking about this enough, or acknowledging this incredible social tendency of ours, which is going to impact how we integrate these machines, because we treat them so differently than other devices.
One man visited all 2964 bus stops in San Francisco — for science
Americans don't like to ride the bus. There are ways to fix that.