Leaders Are More Likely To Be Sociopaths
Paul R. Lawrence is a Professor Emeritus of Harvard Business School, where he served nine years as chairman of the Organizational Behavior area and also as chairman of both the MBA and AMP programs. His research, published in 25 books and numerous articles, has dealt with the human aspects of management, organizational change, organization design, human nature, and leadership. His 1967 book, Organization and Environment (written with Professor Jay Lorsch), added "contingency theory" to the vocabulary of students of organizational behavior. Recently he has, with others, made a comparative study of Soviet management practices that was published in 1990 as Behind the Factory Walls: Decision Making in Soviet and U.S. Enterprises.
Question: Are leaders more likely to be sociopaths?
Paul Lawrence: Well, the question becomes, you know, do these people without conscience (let's call them PWOCs, as a shorthand) get into leadership positions? And they probably do, out of all proportion to their percentage of the population. We estimate maybe 2% to 4% of the population are such people, and we think they get into leadership positions maybe 8% or 10% of the time. But you know, any percentage is a menace, because they can wreak havoc exploiting other people. They probably get there more than others because it's the only thing they're looking for in life. Normal people have a lot of things they're trying to get in life. They're trying to have healthy families and good relationships with friends and so forth. And if you aren't paying any attention to that, you can probably get to a power position more readily, because you can be pretty cunning and pretty smart, and a lot of them are very charming. You know, a lot of them don't come across as evil; they come across as very charming people, and they can worm their way into those spots, and we have to be cautious.
A lot of history records the fact that such people have gotten into important positions. The whole Dark Ages was a period in which those people got into leadership positions in governments on a large-scale basis, and there was a tremendous amount of warfare and suffering during those times. I think the whole Renaissance has been an effort to move away from that kind of leadership. I think the effort to put together the Constitution of the United States, which I discuss at some length in the book, was an effort to create a government that can protect itself against that kind of leadership. Balancing power, and not letting it get concentrated in any one office, is a way of avoiding that kind of leadership.
So it has come up throughout history, and that is thoroughly discussed in the book. We see it, obviously, in business as well; we can name, and do name, prominent business leaders who are highly suspected of having that trait.
But the point is, they do get into some of those roles. For instance, take the scandal on Wall Street, with the crash in the market and the resulting worldwide depression. I discuss that in a chapter where I come out with a fairly bold statement, which is still not the way the government defines what happened. There were a few people (there didn't have to be many, and they didn't have to be the CEOs of the big banks) who saw the opportunity to buy up subprime mortgages. Those mortgages were really written without much interest in whether the recipients could repay them, and so were subject to a lot of foreclosures. But the banks that wrote them knew they could instantly sell them to the Wall Street banks, which were collecting these mortgages wholesale so they could slice and dice them into mysterious packages and sell them as Triple-A bonds, certified by the rating agencies, collecting 100 cents on the dollar from the trustees of pension funds and endowments. Those trustees were responsible for making investments in bonds (by law they had to), so those bonds looked pretty good to them. They didn't realize that the bonds were phony, really worth maybe only 50% of their face value at the moment they bought them. And that was the con, the absolute fraud, that was pulled off. And we still don't have a clear understanding by the public, or even by the Department of Justice, that that is what happened, and we should be prosecuting those people and getting out the evidence that will prove those are criminal actions.
Question: What would we do if genetics could pinpoint someone as a psychopath?
Paul Lawrence: Well, obviously, that's an extremely difficult question. It's going to raise a lot of moral questions. What do we do with people who are positively identified by DNA as being psychopathic types? These are characteristics they didn't ask for; they didn't choose them; they were simply an accident of birth. Yet they nevertheless make such people a hazard to others, who have to find some way to protect themselves, some way to constrain them, so they can't do things like Hitler did to so many people in the world.
Well, you know, I don't have all the answers to that. I have thought about it; a lot of people have thought about it. I think, you know, one possibility, when you're considering candidates for a powerful position and considering who is going to get a job, is to say, "Well, maybe we ought to test them and see that they get a license, so that they're qualified," the way we do with people who are going to be airline pilots, or people going into a number of professional roles like doctors and lawyers and so forth: they have to pass a test to be licensed for those roles. Well, if it's a powerful role, we could say that part of the licensing process is to test your DNA to see whether or not you're, you know, an innate psychopath, because we do not want such people in such power positions. "You've got to go find something else to do in this world besides that, because we cannot trust you with that kind of power."
That's just one idea. I don't say it's the answer; I think we've got to come up with a lot of ideas and put our minds to work on it.
Recorded on July 28, 2010
Interviewed by Max Miller
People without a conscience don’t need to satisfy the drive to bond and can focus entirely on the drive to acquire, making them more likely to seek leadership positions.
The secret to how scorpions, spiders, and ants puncture tough skin
These animals grow scalpel-sharp and precisely shaped tools that are resistant to breaking.
My colleagues and I call these "heavy element biomaterials," and in a new paper, we suggest that these materials make it possible for animals to grow scalpel-sharp and precisely shaped tools that are resistant to breaking, deformation and wear.
Because of the small size of things like ant teeth, it has been hard for biologists to test how well the materials they are made of resist fractures, impacts and abrasions. My research group developed machines and methods to test these and other properties, and along with our collaborators, we studied their composition and molecular structure.
We examined ant mandible teeth and found that they are a smooth mix of proteins and zinc, with single zinc atoms attached to about a quarter of the amino acid units that make up the proteins forming the teeth. In contrast, calcified tools – like human teeth – are made of relatively large chunks of calcium minerals. We think the lack of chunkiness in heavy element biomaterials makes them better than calcified materials at forming smooth, precisely shaped and extremely sharp tools.
To evaluate the advantages of heavy element biomaterials, we estimated the force, energy and muscle size required for cutting with tools made of different materials. Compared with other hard materials grown by these animals, the wear-resistant zinc material enables heavily used tools to puncture stiff substances using only one-fifth of the force. The estimated advantage is even greater relative to calcified materials that – since they can't be nearly as sharp as heavy element biomaterials – can require more than 100 times as much force.
Biomaterials that incorporate zinc (red) and manganese (orange) are located in the important cutting and piercing edges of ant mandibles, worm jaws and other 'tools.' (Robert Schofield, CC BY-ND)
Why it matters
It's not surprising that materials that could make sharp tools would evolve in small animals. A tick and a wolf both need to puncture the same elk skin, but the wolf has vastly stronger muscles. The tick can make up for its tiny muscles by using sharper tools that focus force onto smaller regions.
But, like a sharp pencil tip, sharper tool tips break more easily. The danger of fracture is made even worse by the tendency for small animals to extend their reach using long thin tools – like those pictured above. And a chipped claw or tooth may be fatal for a small animal that doesn't have the strength to cut with blunted tools.
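The pressure argument above can be sketched with a toy calculation. Assuming puncture occurs once the tool tip exerts some fixed critical pressure, the required force scales with the tip's contact area, so a tenfold-sharper tip needs roughly a hundredfold less force. All numbers below are illustrative assumptions, not measured values from the study:

```python
import math

# Toy model: puncturing requires reaching a critical pressure at the tool tip.
# Required force = critical pressure x tip contact area (~ pi * r^2),
# so force falls with the square of the tip radius.
CRITICAL_PRESSURE = 100e6  # Pa; assumed puncture threshold for a skin-like material


def puncture_force(tip_radius_m):
    """Force needed to reach the critical pressure over the tip's contact area."""
    area = math.pi * tip_radius_m ** 2
    return CRITICAL_PRESSURE * area


blunt = puncture_force(100e-6)  # 100-micron tip (blunt, large-animal scale)
sharp = puncture_force(10e-6)   # 10-micron tip (sharp, tick-mouthpart scale)

print(f"blunt tip: {blunt:.2f} N, sharp tip: {sharp:.4f} N")
print(f"force ratio: {blunt / sharp:.0f}x")  # radius down 10x -> force down 100x
```

The quadratic scaling is why sharpness matters so much more to a tick than to a wolf, and why a material that keeps an edge sharp without chipping is worth evolving.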
But we found that heavy element biomaterials are also particularly hard and damage-resistant.
From an evolutionary perspective, these materials allow smaller animals to consume tougher foods. And the energy saved by using less force during cutting can be important for any animal. These advantages may explain the widespread use of heavy element biomaterials in nature – most ants, many other insects, spiders and their relatives, marine worms, crustaceans and many other types of organisms use them.
What still isn't known
While my team's research has clarified the advantages of heavy element biomaterials, we still don't know exactly how zinc and manganese harden and protect the tools.
One possibility is that a small fraction of the zinc, for example, forms bridges between proteins, and these cross-links stiffen the material – like crossbeams stiffen a building. We also think that when a fang bangs into something hard, these zinc cross-links may break first, absorbing energy to keep the fang itself from chipping.
We speculate that the abundance of extra zinc is a ready supply for healing the material by quickly reestablishing the broken zinc-histidine cross-links between proteins.
What's next?
The potential that these materials are self-healing makes them even more interesting, and our team's next step is to test this hypothesis. Eventually we may find that self-healing or other features of heavy element biomaterials could lead to improved materials for things like small medical devices.
Robert Schofield, Research Professor in Physics, University of Oregon
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Early humans migrated in and out of Arabia — based on the climate
Whenever the climate cooled, our hominin ancestors would set up shop in the Arabian Peninsula and vanish again when the planet warmed up.
- Despite being the only bridge early hominin species could have crossed to enter Eurasia, the Arabian Peninsula bears little to no evidence of early human occupation.
- Subverting expectations, a recent excavation in the Nefud Desert found tools dated to different stages of hominin evolution.
- It turns out that early humans moved in and out of the peninsula whenever the climate allowed them to do so.
We know a good deal about how early hominins — the branch of our evolutionary tree that split from chimps and bonobos up to seven million years ago — moved around their place of origin in eastern Africa. Fossils indicate they eventually made it to Eurasia through the Levant area of western Asia. This lush, green region, located on the easternmost edge of the Mediterranean, served our ancestors as the highway between two continents, one they would cross many times — in both directions.
Given how the Arabian Peninsula, a landmass that encapsulates the Levant, was our ancestors' one and only access point to the wider world, one would think evidence of their presence would stretch from Israel to Yemen. However, this is not the case. While the Levant is littered with prodigious digging sites, the paleontological and paleoenvironmental records of the peninsula's interior have remained hauntingly empty and fragmented.
That is, until today. According to a new paper published in Nature, excavations in the Nefud Desert in Saudi Arabia unearthed traces of both human and Neanderthal occupation. By shrinking their search window to wetter periods on the geologic time scale — what the authors refer to as "brief 'green' windows of reduced aridity approximately 400, 300, 200, 130-75 and 55 thousand years ago" — archaeologists were able to find a number of Low to Middle Pleistocene Age tools used by proto-humans that ventured into the region after heavy rainfall transformed the desert into a wide-open grassland.
Digging in the desert
To say the interior parts of the Arabian Peninsula have never yielded evidence of hominins would not be entirely true. The earth here hides evidence of hominins, just not of hominin settlements. Whenever archaeologists make a discovery, it is usually the remnants of a makeshift workshop site, which are very different from the cave and rock shelters that can be stumbled upon throughout the more hospitable Levant region. Did we look hard enough, though?
Excavations in northern Saudi Arabia at a site called Khall Amayshan 4 (KAM 4) suggest we did not. On the surface, the site looks like any other part of the Nefud Desert. Below ground, however, sedimentary rocks and interdunal basins tell of a time when this place used to contain a network of lakes and rivers. Such a clear and detailed preservation of this time in geologic history cannot be found anywhere else on the peninsula and was formed serendipitously when a sand dune slid atop the basin to protect it from erosion.
We know the shores at KAM 4 have been occupied by hominins several times during the Pleistocene because different phases of lake formation correspond with a "distinct lithic assemblage" — an archaeological term for stone tools and their byproducts, of which KAM 4 is filled to the brim. A 400,000-year-old assemblage contains small hand axes made from slabs of quartzite, while a 55,000-year-old deposit contains a number of Levallois flakes.
These tools can teach us several things about the hominins that made and used them. In terms of appearance and design, some assemblages at KAM 4 seem to have more in common with those found in Africa than those from the Levantine woodlands, suggesting a different migration out of Africa might have taken place — one that ended up in Arabia rather than Eurasia. "It seems," the researchers write, "that much of Northeast Africa and Southwest Asia shared similar material culture."
Climate change and migratory patterns
Hominin species did not hop continents at random; their migratory patterns were a response to the changing climate of the Pleistocene. Judging from the results of their excavation at KAM 4, researchers identified no less than five distinct movements into the Arabian Peninsula. Given that most of the tools were dated to periods that saw increased rainfall, it is safe to say our ancestors only migrated into the desert when it became hospitable enough for them to do so.
Conversely, researchers were unable to find any tools that would have been left during interglacial periods. It seems that, as the region became warmer and more arid, the hominin populations that had made their home inside the peninsula dispersed once again. The unstable environmental conditions that plagued the peninsula may well explain the fragmentation of its fossil evidence, a problem that researchers in the relatively static Levantine woodlands rarely encounter.
Because climate change and the accompanying mass migratory movements can actually erase the vast majority of a species' fossil record, these findings bear relevance to modern readers. This year's UN climate report warns of Arctic summers without ice and tropical storms that will become even more ubiquitous than they already are. What if hundreds of thousands of people have to leave their homes either temporarily or indefinitely?
A brand-new blue may be the most eye-popping blue yet
Meet a spectacular new blue—the first inorganic new blue in some time.
- Combine yttrium, indium, and manganese, then heat and serve.
- The new blue was synthesized by chemists at Oregon State University.
- YInMn Blue is the latest character in the weird history of the color blue.
The color you're looking at in the unretouched photo above is a stunning new blue called "YInMn Blue." It's the first new inorganic blue pigment developed in hundreds of years. "YInMn Blue" is a contraction of Yttrium, Indium, and Manganese, and the pigment was invented by a team of chemists led by Mas Subramanian at Oregon State University (OSU).
The color was invented in 2009, but it took until last spring for the EPA to approve it for general use — the agency refers to it as "Blue 10G513." Before that, in 2016, the Shepherd Color Company had licensed it for exterior use, and knockoffs of the color popped up here and there in Etsy offerings. It even inspired a new Crayola color called "Bluetiful." Appropriate.
Invisible blue

So, um, the color of the sky is...?
Credit: Constant Loubier/Unsplash
YInMn Blue is the latest character in an odd story: humanity's relationship with the color blue.
For a long time, humans apparently took no note of blue, which is weird. Though blue isn't especially common in vegetation and stone, there's no other color that so envelops us — in the sky above and on the face of the oceans that surround us. (BTW, the late George Carlin once lamented a paucity of blue foods.)
There are no ancient European cave paintings with blue pigments, though blue does appear in some African cave art. There's no mention of it in the Bible. Though there are plenty of references in Homer's Odyssey to white and black, and a few to red and yellow, there's no blue. He refers to the color of the sea as "wine-dark."
Some historians hypothesize that early humans might have been color-blind, capable only of seeing black, white, red, and eventually yellow and green. Perhaps they just weren't very interested in the idea of color altogether.
Maybe, though, a more likely explanation is that lacking a concept and a word for blue, ancient people lacked a frame of reference for understanding what they were seeing. Radiolab did a fascinating episode about this possibility.
A BBC documentary found that people from a Namibian tribe with no separate words for green and blue couldn't differentiate green from blue squares, though there's some controversy about the experiment. What is true, though, is that Eskimos see more types of snow because they have 50 words for it. (The word "Eskimo" groups together the people of the Inuit and Yupik families.) We see just a few.
Blue arrives

Lapis lazuli
Credit: Geert Pieters/Unsplash
While Homer, et al., were stumbling around clueless, it seems that the first folks to get blue were the ancient Egyptians, who were entranced by the semiprecious Afghan stone lapis lazuli about 6,000 years ago. They gave the color a name—ḫsbḏ-ỉrjt—and used the stone liberally in jewelry and headdresses.
The Egyptians even attempted to make paint from the mineral, but failed. Around 2200 B.C., they finally succeeded at producing a light-blue paint, cuprorivaite or "Egyptian blue," from heated limestone, sand, and azurite or malachite. Egypt's precious blue pigments eventually became valued by royalty in Persia, Mesoamerica, and Rome.
The earliest successful lapis lazuli paint—and ultimately Europe's first great blue—appeared in 6th century Buddhist paintings from Bamiyan, Afghanistan. Imported into Europe in the 14th and 15th centuries, ultramarine—from ultramarinus, or "beyond the sea"—was used only in expensive commissioned artwork until a French chemist developed a cheaper, synthetic version in 1826. True ultramarine was so coveted and pricey that, according to the Metropolitan Museum, Vermeer impoverished his family to purchase it, and there's a story that one of Michelangelo's paintings, "The Entombment," was left unfinished because he couldn't afford the ultramarine it required. At the other end of the cost spectrum was the affordable blue dye indigo, made from the plant Indigofera tinctoria and imported to Europe in the 16th century.
Over time, more blues appeared. In 1706, German dye-maker Johann Jacob Diesbach came up with Berliner Blau, or Prussian blue, accidentally, when potash he was using to make red pigment was contaminated with animal blood that paradoxically turned it blue. 1802 saw the invention of cobalt blue, based on the 8th- and 9th-century blue pigments used in Chinese porcelain, by French chemist Louis Jacques Thénard. Cerulean blue—from caeruleum, meaning "heaven" or "sky"—was the last major blue introduced before YInMn Blue. It was invented by Albrecht Höpfner in 1789.
Back to the new blue
The discovery of YInMn Blue occurred when chemistry grad student Andrew Smith was heating manganese oxide to approximately 1200 °C (~2000 °F) to investigate its electronic properties. To his surprise, what emerged from the heat was a brilliant blue compound. Recalls Subramanian: "If I hadn't come from an industry research background — DuPont has a division that developed pigments, and obviously, they are used in paint and many other things — I would not have known this was highly unusual, a discovery with strong commercial potential."
Subramanian knew, he told NPR in 2016, "People have been looking for a good, durable blue color for a couple of centuries." OSU art students soon began experimenting with the new color, incorporating it in watercolors and printing. In 2012, Subramanian's team received a patent for YInMn Blue.
Bonus: Previous blue pigments are prone to fading and are often toxic. These are problems that don't afflict YInMn Blue. "The fact that this pigment was synthesized at such high temperatures signaled that this new compound was extremely stable, a property long sought in a blue pigment," says Subramanian in the study documenting YInMn Blue.
Subramanian and his colleagues have been developing colors ever since, including new bright oranges, new purples, and turquoises and greens. Currently, they're on the hunt for a chromatic Holy Grail: a stable, heat-reflective, and brilliant red. It's a challenge. While red is among the oldest colors, Subramanian calls the shade he seeks "the most elusive color to synthesize."
TikTok tics: when Tourette's syndrome went viral
Once limited in range, mass hysteria can now spread across the globe in an instant.
- Mass psychogenic illness, also known as mass hysteria, is when a group of people manifest physical symptoms from imagined threats.
- History is littered with outbreaks of mass hysteria.
- Recently, alleged cases of Tourette's syndrome appeared all over the world. Was it real or mass psychogenic illness?
While the term is often avoided for fear of ridiculing something more serious, mass psychogenic illness (MPI) — also known as mass sociogenic illness (MSI) or mass hysteria — is a real occurrence that can cause a variety of physical symptoms to manifest in groups of people despite the lack of any physical cause. Often compared to conversion disorder, in which emotional issues are "converted" into physical problems, MPI tends to occur among people who share anxieties, fears, and a sense of community. In the right group of people, it can spread like a virus.
A curious case of the condition related to TikTok videos shows both how imagined conditions can spread and how our modern media landscape presents new problems never even dreamt of in a time before the internet.
TikTok tics
In 2019, a strange slew of new Tourette's cases made its way into hospitals all over the world. Oddly, these were suddenly occurring in children well over the age of six, the age of typical onset. Most peculiar of all, many of the patients were exhibiting identical symptoms and tics. While many cases of Tourette's are similar, these symptoms were precisely the same.
As it turned out, the tics were also identical to those exhibited by one Jan Zimmermann, a 23-year-old YouTuber from Germany with Tourette's. On his channel, Gewitter im Kopf, he documents his daily life with the condition. All of the patients who suddenly claimed to have tics were fans of his or of similar channels on YouTube and TikTok.
There was nothing physically wrong with the large number of people who suddenly came down with Tourette's-like symptoms, and most of them recovered immediately after being told that they did not have Tourette's syndrome. Others recovered after brief psychological interventions. The spread of the condition across a social group despite the lack of a physical cause all pointed toward an MPI event.
Historical cases of mass hysteria
Of course, humans do not need social media to develop symptoms of a disease that they do not have. Several strange cases of what appears to have been mass hysteria exist throughout history. While some argue for a physical cause in each case, the consensus is that the ultimate cause was psychological.
The dancing plagues of the Middle Ages, in which hundreds of people began to dance until they were utterly exhausted despite apparently wishing to stop, are thought to have been examples of mass madness. Some cases also involved screaming, laughing, having violent reactions to the color red, and lewd behavior. Attempts to calm the groups by providing musicians just made the problem worse, as people joined in to dance to the music. By the time the dancing plague of 1518 ended, several people had died of exhaustion or injuries sustained during their dance marathon.
It was also common for nunneries to get outbreaks of what was then considered demonic possession but what now appears to be MPI. In many well-recorded cases, young nuns, often cast into a life of poverty and severe discipline with little say in the matter, suddenly found themselves "possessed" and began behaving in extremely un-nunlike fashion. These instances often spread to other members of the convent and required intervention by exorcists to resolve.
A more recent example might be the curious story of the Mad Gasser of Mattoon. During WWII, in the small town of Mattoon, Illinois, 33 people awoke in the middle of the night to a "sweet smell" in their homes followed by symptoms such as nausea, vomiting, and paralysis. Many claimed to see a figure fleeing the scene outside their windows. Claims of gassings rapidly followed the initial cases, and the police department was swamped with reports that amounted to nothing. The cases ended after the sheriff threatened to arrest anyone submitting a report of being gassed without agreeing to a medical review.
Each of these cases exhibits the generally agreed upon conditions for MPI: the people involved were a cohesive group, they all agreed on the same threats existing, and they were enduring stressful and emotional conditions that later manifested as physical symptoms. Additionally, the symptoms appeared suddenly and spread by sight and communication among the affected individuals.
Social diseases for a social media age
One point upon which most sources on MPI agree is the tendency of the outbreaks to occur among cohesive groups whose members are in regular contact. This is easy to see in the above examples: nuns live together in small convents, medieval peasants did not travel much, and the residents of Mattoon were in a small community.
This makes the more recent case that relies on the internet all the more interesting. And it's not the only one. Another MPI centered around a school in New York in 2011.
As a result, a team of German researchers has put forth the idea of a new version of MPI for the modern age: "mass social media-induced illness." It is similar to MPI but differs in that it is explicitly for cases driven by social media, in which people suffering from the same imagined symptoms never actually come into direct contact with one another.
Of course, these researchers are not the first to consider the problem in a digital context. Dr. Robert Bartholomew described the aforementioned New York case in a paper published in the Journal of the Royal Society of Medicine.
All this seems to imply that our online interactions can affect us in much the same ways as direct communication has for ages past and that the social groups we form online can be cohesive enough to cause identical symptoms in people who have never met. Therefore, we likely have not seen the last of "mass social media-induced illness."
