I used to work at Cato, so a lot of people have asked me about the ongoing battle for control of the institute. Here's what I think. What I think is that so far the rhetoric around the controversy illustrates Tyler Cowen's dumbifying principle: "Just imagine yourself pressing a button every time you tell the good vs. evil story, and by pressing that button you're lowering your IQ by ten points or more." I don't think Ed Crane and the Cato incumbents are especially good. I don't think the Kochs are especially evil.
It seems clear enough that the Kochs are trying to take over by stacking the board. I have no idea what they're up to, but judging from their board nominees and appointees, it doesn't look at all good. On the other hand, the hand-wringing over the new Koch-nominated board members--Ted Olson, Andrew Napolitano, Nancy Pfotenhauer, and Kevin Gentry--strikes me as overwrought. It's worth noting that David Koch has been on the Cato board for years, the whole time I was employed there and more, and I don't remember anyone ever suggesting he was an ideological or strategic danger to Cato's mission. But suddenly he's an existential threat! Cato and Cato's chairman Bob Levy didn't seem to have a huge problem with Ted Olson, a Solicitor General under G.W. Bush, when he was at Cato arguing for gay marriage on constitutional grounds. Andrew Napolitano is a stout libertarian who put a ton of Cato guys on Freedom Watch, his recently cancelled show on Fox Business. Cato executive VP David Boaz seems to get along pretty well, ideologically and otherwise, with Napolitano in this recent clip. Nancy Pfotenhauer, a former G.W. Bush and John McCain campaign operative, strikes me as a classic right-leaning fusionist, of which there are not a few at Cato. That she was married for a while to Cato senior fellow Dan Mitchell suggests to me that she does not inhabit an ideological/institutional universe foreign to Cato, as does the fact that the Independent Women's Forum, of which Pfotenhauer was for years the president, is currently run by Cato alum Carrie Lukas. Kevin Gentry is a hard-core Virginia Republican Party operative with whom I worked back when I was at the Institute for Humane Studies and the Mercatus Center. He's a fundraiser.
And, hey, what about IHS and Mercatus? I'll get to that in a second. One more thing about the board. The new members, except maybe for Napolitano, are indeed both Koch and GOP operatives. They certainly represent a bid for control. And they displaced several of Cato's most generous and involved long-time donors. I can understand why the current management is outraged. My point is that the new board members' brand of odious right-fusionist politics isn't obviously incompatible with Cato's mission, or significantly different from David Koch's.
The way Cato has so eagerly jumped on the Koch-bashing bandwagon in its hour of crisis strikes me as both transparently opportunistic and damaging to the broader libertarian movement. Charles Koch is the chairman of the board at the Institute for Humane Studies, which, as far as I can see, has not become a whit less libertarian in orientation over the past several years. When I worked there, Charles Koch was also chairman of the Mercatus Center's board, and he's on the board currently (but I can't tell from the Mercatus website who the chair is, if they have one). A number of Mercatus' policy staff once worked at Cato, and they don't seem to have changed their ideological orientation at all. Is Cato's management now arguing that Mercatus' scholars labor under a cloud of partisanship which threatens the independence and integrity of their work? Is Cato's management arguing that IHS's libertarian principles are now suddenly threatened by Charles Koch's money and leadership? Cato has worked closely with IHS for decades, and has long been a proud host each summer of a number of IHS Charles G. Koch Summer Fellows. Cato's worries about Charles Koch's baleful un-libertarian influence are completely new to me! The idea that CGK is a partisan threat to an independent libertarian perspective is now very popular at Cato, and its popularity coincides exactly and suspiciously with the onset of CGK's attempt to capture control of the institution he co-founded. If David Koch is such a danger, why wasn't he one last year? As John Stossel used to say, "Gimme a break!"
I like the old Cato board members more than the new Cato board members. And I do suspect that a Koch-controlled Cato would work more closely with the Republican Party, which I don't at all like. Yet I've seen very little evidence that a Koch-controlled Cato would look a lot different ideologically than Cato does currently. However, there's every reason to believe that most of the current management would be pushed out of a Koch-controlled Cato, which I suspect is really the current management's biggest worry. The argument that widespread knowledge of actual Koch control would delegitimize Cato's work seems to me quite weak. The facts that Charles Koch co-founded Cato and that David Koch has been on the board for years and years were more than proof enough for anyone inclined to write off Cato as a Koch-run organ of the oligarchy before the coup attempt. Should the Kochs succeed, nothing much will change in this regard. The right way to look at the PR question is that the takeover attempt is temporarily a huge PR win for Cato, scored at the expense of other Koch-affiliated institutions. If Crane and Co. successfully thwart the takeover, they'll be able to enjoy the PR boost for a good while longer.
The argument that Koch control of Cato would threaten the intellectual independence of Cato scholars also seems weak to me. This is in part because I don't know of any such problem at Mercatus, the most closely analogous Kochtopus institution, and in part because I doubt that the intellectual independence of Cato scholars is among the current management's main priorities.
All that said, I think it's better for libertarians if some prominent libertarian institutions remain outside the Kochtopus, and if Julian Sanchez's presignation letter doesn't kick in. Still, this isn't a battle between good and evil, and the stakes are probably lower than you think. Of course, nobody likes to be on the wrong side of creative destruction's wrecking ball, but it can be indispensable and revitalizing, even for ideological movements.
Study: Unattractive people far overestimate their looks
The finding is remarkably similar to the Dunning-Kruger effect, which describes how incompetent people tend to overestimate their own competency.
- Recent studies asked participants to rate the attractiveness of themselves and other participants, who were strangers.
- The studies kept yielding the same finding: unattractive people overestimate their attractiveness, while attractive people underrate their looks.
- Why this happens is unclear, but it doesn't seem to be due to a general inability to judge attractiveness.
There's no shortage of disparities between attractive and unattractive people. Studies show that the best-looking among us tend to have an easier time making money, receiving help, avoiding punishment, and being perceived as competent. (Sure, research also suggests beautiful people have shorter relationships, but they also have more sexual partners, and more options for romantic relationships. So call it a wash.)
Now, new research reveals another disparity: Unattractive people seem less able to accurately judge their own attractiveness, and they tend to overestimate their looks. In contrast, beautiful people tend to rate themselves more accurately. If anything, they underestimate their attractiveness.
The research, published in the Scandinavian Journal of Psychology, involved six studies that asked participants to rate the attractiveness of themselves and other participants, who were strangers. The studies also asked participants to predict how others might rate them.
In the first study, lead author Tobias Greitemeyer found that the participants who were most likely to overestimate their attractiveness were among the least attractive people in the study, based on average ratings.
Chart: Ratings of subjective attractiveness as a function of the participant's objective attractiveness (Study 1). (Credit: Greitemeyer)
"Overall, unattractive participants judged themselves to be of about average attractiveness and they showed very little awareness that strangers do not share this view. In contrast, attractive participants had more insights into how attractive they actually are. [...] It thus appears that unattractive people maintain illusory self‐perceptions of their attractiveness, whereas attractive people's self‐views are more grounded in reality."
Why do unattractive people overestimate their attractiveness? Could it be because they want to maintain a positive self-image, so they delude themselves? After all, previous research has shown that people tend to discredit or "forget" negative social feedback, which seems to help protect a sense of self-worth.
To find out, Greitemeyer conducted a study that aimed to put participants in a positive, non-defensive mindset before rating attractiveness. He did that by asking participants questions that affirmed parts of their personality that had nothing to do with physical appearance, such as: "Have you ever been generous and selfless to another person?" Yet, this didn't change how participants rated themselves, suggesting that unattractive people aren't overestimating their looks out of defensiveness.
The studies kept yielding the same finding: unattractive people overestimate their attractiveness. Does that bias sound familiar? If so, you might be thinking of the Dunning-Kruger effect, which describes how incompetent people tend to overestimate their own competency. Why? Because they lack the metacognitive skills needed to discern their own shortcomings.
Greitemeyer found that unattractive people were worse at differentiating between attractive and unattractive people. But the finding that unattractive people may have different beauty ideals (or, more plainly, weaker ability to judge attractiveness) did "not have an impact on how they perceive themselves."
In short, it remains a mystery exactly why unattractive people overestimate their looks. Greitemeyer concluded that, while most people are decent at judging the attractiveness of others, "it appears that those who are unattractive do not know that they are unattractive."
Unattractive people aren't completely unaware
The results of one study suggested that unattractive people aren't completely in the dark about their looks. In the study, unattractive people were shown a set of photos of highly attractive and unattractive people, and they were asked to select photos of people with comparable attractiveness. Most unattractive people chose to compare themselves with similarly unattractive people.
"The finding that unattractive participants selected unattractive stimulus persons with whom they would compare their attractiveness to suggests that they may have an inkling that they are less attractive than they want it to be," Greitemeyer wrote.
Helmet worn at home shrank man's brain tumor by a third
The new brain tumor treatment targets a cancer that kills 75% of patients within a year.
This article was originally published on our sister site, Freethink.
A new brain tumor treatment appeared to shrink a man's aggressive glioblastoma tumor by nearly a third — and all he had to do was wear a noninvasive helmet at home.
The challenge: Glioblastoma is a rare but aggressive type of brain cancer that is almost always fatal in adults — 75% of patients die within a year of diagnosis, and only 5% live more than five years.
Treatment usually starts with risky surgery to remove the bulk of the brain tumor, after which a patient might undergo chemo or radiation therapy.
Not only can the side effects of those treatments hurt a patient's quality of life, but the treatments themselves can't actually cure the brain cancer — they just buy the patient a little more time.
Why it matters: Survival rates for glioblastoma have remained mostly stagnant over the past few decades, meaning our ability to treat the deadly brain cancer isn't getting much better.
If that doesn't change, we'll continue to lose about 200,000 people to the disease every year, worldwide.
New brain tumor treatment: In a past study, researchers at Houston Methodist Neurological Institute found they could kill glioblastoma cells in the lab by subjecting them to oscillating magnetic fields, which they created by using electricity to rotate magnets in a precise way.
They believe the fields disrupt the transport of electrons during the process cells use to create energy. However, compounds produced by tumor cells are needed to trigger this disruption, meaning healthy cells should be spared while glioblastoma cells die.
The case study: In 2019, the researchers received approval under the FDA's compassionate use protocol to test the therapy on a man whose brain tumor wasn't responding to aggressive cancer treatments.
Over the course of three days, they trained the man and his wife how to deliver the therapy using a helmet equipped with three rotating magnets.
They then sent him home with the helmet and instructions to administer the brain tumor treatment for two hours every day at first and then work his way up to six hours.
The results: The man used the helmet for 36 days before suffering an unrelated head injury that led to his death. His family gave the researchers permission to autopsy his brain, and they found that his tumor had shrunk by 31% since the start of the study.
"Thanks to the courage of this patient and his family, we were able to test and verify the potential effectiveness of the first noninvasive therapy for glioblastoma in the world," corresponding author David S. Baskin said in a press release.
Looking ahead: While this study is encouraging, the researchers will need to prove their brain tumor treatment can help more than a single patient.
The unlucky head injury also means we don't know whether shrinking the tumor in the short run improves survival rates. But if it can, the helmet could mark a turning point in the battle against glioblastoma.
"Imagine treating brain cancer without radiation therapy or chemotherapy," Baskin said. "Our results in the laboratory and with this patient open a new world of non-invasive and nontoxic therapy for brain cancer, with many exciting possibilities for the future."
Robots may be more like animals than humans
Meet MIT's Kate Darling, a robot ethicist who says that we should rethink our relationship with robots.
This article was originally published on our sister site, Freethink.
We're nearly a quarter into the 21st century, and by now, the Terminator-style portrayal of robots taking over the world has become a tired cliche. While it's seductive, most of us are aware that this isn't (likely) the future of intelligent life. But what will that look like, then?
According to Kate Darling, a robot ethicist at MIT and author of "The New Breed: What Our History With Animals Reveals About Our Future with Robots," the answer is right in front of us: animals.
While we have traditionally viewed robots as human-like, Darling believes the more apt comparison is seeing them as a different kind of "animal."
Robots will increasingly occupy shared spaces with humans, social robots will take off, and the questions around how humans should treat and interact with robots have never been more critical, Darling argues.

Her point isn't that robots and animals are the same or that they should be used exactly the same way, but that we should be open to the different ways we can collaborate with robots, harnessing their diverse range of skills and abilities — as we do with animals.
I spoke to Darling about how a robot's design affects our interaction with it, why we should stop worrying about robots replacing humans, and more. Here is our conversation, edited and condensed for clarity.
Why have robots traditionally been designed to look like humans? What is the thinking behind that?
We've always been fascinated with recreating ourselves. We had automata back in ancient times that were recreations of human bodies that could move around. Even the earliest artificial intelligence researchers started out with a goal of recreating human intelligence.
With robots and AI, in particular, they are machines that can sense and think and make autonomous decisions and learn. So we tend to automatically compare them to ourselves as well because of our inherent tendency to compare everything to ourselves. And traditionally, a lot of robots have been human-shaped — even though that's not necessarily the most practical form.
What are the problems with this human-like design?
So there's this subconscious comparison of robots to humans that has been enhanced by the design. I think it doesn't make sense. First of all, AI is not like human intelligence — robots don't have the same skills as people. So oftentimes when we expect a robot to behave like a human, it's a very disappointing experience. That's not to say that robots and AI aren't smart, just that they have a very different type of intelligence and skill than people do.
Also, this comparison really limits us. The early AI researchers were trying to recreate a human brain and human intelligence, but that's not where we've ended up. And so our question shouldn't be, "at what point can we recreate human ability and human skill in a robot?" The question is, "why would we want to do that in the first place when we can create something different?" Robots and AI don't think or behave like us, but they are very useful and very smart.
Instead, you suggest using animals as a way to think about robots. What are the parallels here?
There are so many fun parallels. For thousands of years, we've used animals as a supplement to human ability. Not because they do what we do, but because their skill sets are so different from ours.
We used oxen to plow our fields, we've used horses to let us travel around in new ways. In some ways, a horse-drawn carriage is the original semi-autonomous vehicle. We've used pigeons to carry mail or deliver medicine in ways similar to how we're using drones today. We used them to take aerial photographs. So they were the original hobby photography drone. We've used dolphins in the Navy to detect mines underwater or locate lost underwater equipment, which is a similar function to how we're starting to use underwater robots today.
But animals have feelings, and robots don't. How does this affect the way we do, or should, treat robots?
Right. So this is something that has always really fascinated me about human-robot interaction: what it actually says about how we treat other entities. Because in many cases we have not treated animals very well in partnering with them. And in fact, in Western society, we're often quite hypocritical about how we think about how we want to treat other beings and how we actually treat them.
So a lot of us think that we care about whether other beings feel or whether they have intelligence or whether they can suffer, but if you look at the history of animal rights in Western society, it quickly becomes apparent that we have only protected the animals that are cute or that we care about culturally, or that we have some emotional relationship to.
What's so interesting about human-robot interaction research is it's showing that we treat robots in very similar ways, where we treat some of them that we have no emotional connection to as tools and products, and then others we treat as companions or develop emotional attachments to.
So it's entirely possible that if we don't stop and think about this, we may default to caring more about a robot that feels nothing than we might about a slimy slug in our backyard. It's actually a unique moment in time where we could stop and think and maybe nudge our behavior in a way that's more consistent with what we feel our values are.
It's interesting you say that because I was thinking the opposite — that we might treat animals kindly, but we sometimes treat robots (especially social ones) with detachment. And there can be harmful effects of this. For instance, if we talk "down" to an Amazon Alexa, it has implications for the way we might treat women in our lives.
So I do think there actually is possibly an argument for treating technology with some kindness, as ridiculous as that sounds. Even though the technology can't feel and we're not anywhere close to having sentient robots or robot consciousness.
But there are questions around our own behavior. So if you get used to barking commands at Alexa, or your kid gets used to barking commands at Alexa, you could get used to barking commands at women, or women named Alexa, or other people. Parents have raised enough concern about this that a lot of these home voice assistant companies have released a "magic word" feature that makes Alexa respond only if you say "please" and "thank you," for example.
But then you get into all sorts of questions with different designs of robots. Increasingly, certain robots are being designed in a very lifelike way, including robots that can respond to being kicked, for example, with a simulation of pain. And one question is, even though the robot can't feel, should we let people kick them?
And what if we had a real-life Westworld theme park, where people could go and do anything they want to life-like robots? Is that a healthy outlet for violent behavior or (does it) train people's cruelty muscles? I don't have an answer to the question, but it is a question that is going to be raised very soon.
Right. So on the flip side, maybe we could go in the other direction and make them seem less lifelike, more like just a neutral object that we don't associate with any kind of life?
We can try. What we're also seeing in the research is that it's really hard to turn off this tendency that we have to treat robots like living things. Even something as simple as the Roomba vacuum cleaner — just because it's moving around on its own, people will name the Roomba. People will feel bad for the Roomba when it gets stuck. So it's a very difficult human tendency to counteract.
And in fact, a lot of animal researchers and nature researchers have moved away from the idea that we have to get rid of how we project ourselves onto animals and have said, "Okay, this is something that is there, we just need to be very aware of it, and we can nudge our behavior in certain directions, but we're not going to get rid of the tendency entirely."
And maybe that's a good thing because it means that we can relate to animals in certain ways that might be actually beneficial for humans. So having therapy dogs or having pets as companionship can actually be a very positive thing for people.
Or military robots, where soldiers are becoming emotionally attached to the bomb disposal units that they work with. Which, at first blush, you would say, "Okay, that's terrible. We don't want soldiers to be risking their lives or behaving in an inefficient way on a battlefield because they've developed an emotional connection to a robot."
But at the same time, if you look at the history of the role that animals have played in war, yes, soldiers sometimes made bad decisions based on wanting to save their dog or their horse on the battlefield. But the animals brought so much emotional comfort to soldiers in very stressful situations that it's not clear to me that it's necessarily a bad thing, even if we could prevent it.
Many people fear that robots are going to threaten us in some way, or replace us. How does shifting it to a view of an animal change the way that we look at that issue?
Particularly in Western society, we have this idea that there's this constant threat of robots rising up against us or coming to replace us. And in part that comes from this comparison of robots to humans — and it's very limiting. It influences a lot of our conversations, from what intelligence is, to robots and jobs, to robots replacing people one-to-one.
Using the animal analogy helps us step away from this fear of being replaced. Animals obviously have not replaced us. Animals have disrupted society. They have created completely different workplaces for people. They have revolutionized farming and transportation and all sorts of things that technology is also going to disrupt — but we haven't had the same type of fear about animals, about animals rising up against us.
This fear of robots is also quite misplaced, given that we're not anywhere close to having artificial superintelligence or any of the science-fiction scenarios that get a lot of attention in the press — it's actually the wrong question to be worried about.
What's been the driving force behind your research? The question you are most interested in?
The thing that blows my mind is our tendency to treat robots like they're alive, even if we know perfectly well that they're just machines. Just a few weeks ago I got this baby harp seal robot called a PARO. It's a medical device that's used with dementia patients in a nursing home. And it looks like a very cute baby seal. It doesn't do very much, it just kind of responds to touch. Makes these little movements and little sounds. I was showing it off to the group of roboticists that I work with. They create social robots — they specifically design robots that give off cues like this.
They were all like, "Oh, it's so cute. Oh, look, it's doing XYZ!" So even the people who build the programs are not immune; in fact, they're still very susceptible to being swayed by these artificial cues that we've programmed into these machines. It seems to be such a deep biological tendency that we have. It always surprises me, even though I've seen it happen and there's so much research on it.
I think we're not talking about it enough, and not acknowledging this incredible social tendency we have, which is going to impact how we integrate these machines, because we treat them so differently than other devices.
One man visited all 2964 bus stops in San Francisco — for science
Americans don't like to ride the bus. There are ways to fix that.