Psychology’s 10 greatest case studies – digested
These ten characters have all had a huge influence on psychology and their stories continue to intrigue each new generation of students. What’s particularly fascinating is that many of their stories continue to evolve – new evidence comes to light, or new technologies are brought to bear, changing how the cases are interpreted and understood. What many of these 10 also have in common is that they speak to some of the perennial debates in psychology, about personality and identity, nature and nurture, and the links between mind and body.
Phineas Gage
One day in 1848 in Central Vermont, Phineas Gage was tamping explosives into the ground to prepare the way for a new railway line when he had a terrible accident. The detonation went off prematurely, and his tamping iron shot into his face, through his brain, and out the top of his head.
Remarkably, Gage survived, although his friends and family reportedly felt he was changed so profoundly (becoming listless and aggressive) that “he was no longer Gage.” There the story used to rest – a classic example of frontal brain damage affecting personality. However, recent years have seen a drastic reevaluation of Gage’s story in light of new evidence. It’s now believed that he underwent significant rehabilitation and in fact began work as a stagecoach driver in Chile. A simulation of his injuries suggested much of his right frontal cortex was likely spared, and photographic evidence has been unearthed showing a post-accident dapper Gage. Not that you’ll find this revised account in many psychology textbooks: a recent analysis showed that few of them have kept up to date with the new evidence.
Henry Gustav Molaison (H.M.)
Henry Gustav Molaison (known for years as H.M. in the literature to protect his privacy), who died in 2008, developed severe amnesia at age 27 after undergoing brain surgery as a form of treatment for the epilepsy he’d suffered since childhood. He was subsequently the focus of study by over 100 psychologists and neuroscientists and he’s been mentioned in over 12,000 journal articles! Molaison’s surgery involved the removal of large parts of the hippocampus on both sides of his brain, and the result was that he was almost entirely unable to store any new information in long-term memory (there were some exceptions – for example, after 1963 he was aware that a US president had been assassinated in Dallas). The extremity of Molaison’s deficits was a surprise to experts of the day because many of them believed that memory was distributed throughout the cerebral cortex. Today, Molaison’s legacy lives on: his brain was carefully sliced and preserved and turned into a 3D digital atlas, and his life story is reportedly due to be turned into a feature film based on the book researcher Suzanne Corkin wrote about him: Permanent Present Tense: The Man With No Memory and What He Taught The World.
Victor Leborgne (nickname “Tan”)
The fact that, in most people, language function is served predominantly by the left frontal cortex has today almost become common knowledge, at least among psych students. However, back in the early nineteenth century, the consensus view was that language function (like memory, see entry for H.M.) was distributed through the brain. A nineteenth-century patient who helped change that was Victor Leborgne, a Frenchman who was nicknamed “Tan” because that was the only sound he could utter (besides the expletive phrase “sacre nom de Dieu”). In 1861, aged 51, Leborgne was referred to the renowned neurologist Paul Broca, but died soon after. Broca examined Leborgne’s brain and noticed a lesion in his left frontal lobe – a segment of tissue now known as Broca’s area. Given Leborgne’s impaired speech but intact comprehension, Broca concluded that this area of the brain was responsible for speech production, and he set about persuading his peers of this fact – now recognised as a key moment in psychology’s history. For decades little was known about Leborgne, besides his important contribution to science. However, in a paper published in 2013, Cezary Domanski at Maria Curie-Sklodowska University in Poland uncovered new biographical details, including the possibility that Leborgne muttered the word “Tan” because his birthplace of Moret was home to several tanneries.
Wild Boy of Aveyron
The “Wild Boy of Aveyron” – named Victor by the physician Jean-Marc Itard – was found emerging from the Aveyron forest in South West France in 1800, aged 11 or 12, where it’s thought he had been living in the wild for several years. For psychologists and philosophers, Victor became a kind of “natural experiment” into the question of nature and nurture. How would he be affected by the lack of human input early in his life? Those who hoped Victor would support the notion of the “noble savage” uncorrupted by modern civilisation were largely disappointed: the boy was dirty and dishevelled, defecated where he stood and was apparently motivated largely by hunger. Victor acquired celebrity status after he was transported to Paris and Itard began a mission to teach and socialise the “feral child”. This programme met with mixed success: Victor never learned to speak fluently, but he dressed, learned civil toilet habits, could write a few letters and acquired some very basic language comprehension. Autism expert Uta Frith believes Victor may have been abandoned because he was autistic, but she acknowledges we will never know the truth of his background. Victor’s story inspired the 2004 novel The Wild Boy and was dramatised in the 1970 French film The Wild Child.
Kim Peek
Nicknamed ‘Kim-puter’ by his friends, Peek, who died in 2010 aged 58, was the inspiration for Dustin Hoffman’s autistic savant character in the multi-Oscar-winning film Rain Man. Before that movie, which was released in 1988, few people had heard of autism, so Peek, via the film, can be credited with helping to raise the profile of the condition. Arguably though, the film also helped spread the popular misconception that giftedness is a hallmark of autism (in one notable scene, Hoffman’s character deduces in an instant the precise number of cocktail sticks – 246 – that a waitress drops on the floor). Peek himself was actually a non-autistic savant, born with brain abnormalities including a malformed cerebellum and an absent corpus callosum (the massive bundle of tissue that usually connects the two hemispheres). His savant skills were astonishing and included calendar calculation, as well as an encyclopaedic knowledge of history, literature, classical music, US zip codes and travel routes. It was estimated that he read more than 12,000 books in his lifetime, all of them committed to flawless memory. Although outgoing and sociable, Peek had coordination problems and struggled with abstract or conceptual thinking.
Anna O.
“Anna O.” is the pseudonym for Bertha Pappenheim, a pioneering German Jewish feminist and social worker who died in 1936 aged 77. As Anna O. she is known as one of the first ever patients to undergo psychoanalysis, and her case inspired much of Freud’s thinking on mental illness. Pappenheim first came to the attention of the physician Joseph Breuer in 1880 when he was called to her house in Vienna, where she was lying in bed, almost entirely paralysed. Her other symptoms included hallucinations, personality changes and rambling speech, but doctors could find no physical cause. For 18 months, Breuer visited her almost daily and talked to her about her thoughts and feelings, including her grief for her father, and the more she talked, the more her symptoms seemed to fade – this was apparently one of the first ever instances of psychoanalysis or “the talking cure”, although the degree of Breuer’s success has been disputed and some historians allege that Pappenheim did have an organic illness, such as epilepsy. Although Freud never met Pappenheim, he wrote about her case, including the notion that she had a hysterical pregnancy, although this too is disputed. The latter part of Pappenheim’s life in Germany post-1888 was as remarkable as her time as Anna O. She became a prolific writer and social pioneer, authoring stories and plays and translating seminal texts, and she founded social clubs for Jewish women, worked in orphanages and founded the German Federation of Jewish Women.
Kitty Genovese
Sadly, it is not really Kitty Genovese the person who has become one of psychology’s classic case studies, but rather the terrible fate that befell her. In 1964 in New York, Genovese was returning home from her job as a barmaid when she was attacked and eventually murdered by Winston Moseley. What made this tragedy so influential to psychology was that it inspired research into what became known as the Bystander Phenomenon – the now well-established finding that our sense of individual responsibility is diluted by the presence of other people. According to folklore, 38 people watched Genovese’s demise yet not one of them did anything to help, apparently a terrible real-life instance of the Bystander Effect. However, the story doesn’t end there, because historians have since established that the reality was much more complicated – at least two people did try to summon help, and there was only one witness to the second and fatal attack. While the main principle of the Bystander Effect has stood the test of time, modern psychology’s understanding of the way it works has become a lot more nuanced. For example, there’s evidence that in some situations people are more likely to act when they’re part of a larger group, such as when they and the other group members all belong to the same social category (such as all being women) as the victim.
Little Albert
“Little Albert” was the nickname that the pioneering behaviourist psychologist John Watson gave to an 11-month-old baby in whom, with his colleague and future wife Rosalie Rayner, he deliberately attempted to instill certain fears through a process of conditioning. The research, which was of dubious scientific quality, was conducted in 1920 and has become notorious for being so unethical (such a procedure would never be given approval in modern university settings). Interest in Little Albert has been reignited in recent years as an academic quarrel has erupted over his true identity. A group led by Hall Beck at Appalachian State University announced in 2011 that they thought Little Albert was actually Douglas Merritte, the son of a wet nurse at Johns Hopkins University, where Watson and Rayner were based. According to this sad account, Little Albert was neurologically impaired, compounding the unethical nature of the Watson/Rayner research, and he died aged six of hydrocephalus (fluid on the brain). However, this account was challenged by a different group of scholars led by Russell Powell at MacEwan University in 2014. They established that Little Albert was more likely William A. Barger (recorded in his medical file as Albert Barger), the son of a different wet nurse. Earlier this year, textbook writer Richard Griggs weighed up all the evidence and concluded that the Barger story is the more credible, which would mean that Little Albert in fact died in 2007, aged 87.
Chris Sizemore
Chris Costner Sizemore is one of the most famous patients to be given the controversial diagnosis of multiple personality disorder, known today as dissociative identity disorder. Sizemore’s alter egos apparently included Eve White, Eve Black, Jane and many others. By some accounts, Sizemore expressed these personalities as a coping mechanism in the face of traumas she experienced in childhood, including seeing her mother badly injured and a man sawn in half at a lumber mill. In recent years, Sizemore has described how her alter egos have been combined into one united personality for many decades, but she still sees different aspects of her past as belonging to her different personalities. For example, she has stated that her husband was married to Eve White (not her), and that Eve White is the mother of her first daughter. Her story was turned into a movie in 1957 called The Three Faces of Eve (based on a book of the same name written by her psychiatrists). Joanne Woodward won the Best Actress Oscar for portraying Sizemore and her various personalities in this film. Sizemore published her autobiography, I’m Eve, in 1977. In 2009, she appeared on the BBC’s HARDtalk interview show.
David Reimer
One of the most famous patients in psychology, Reimer lost his penis in a botched circumcision operation when he was just 8 months old. His parents were subsequently advised by psychologist John Money to raise Reimer as a girl, “Brenda”, and for him to undergo further surgery and hormone treatment to assist his gender reassignment.
Money initially described the experiment (no one had tried anything like this before) as a huge success that appeared to support his belief in the important role of socialisation, rather than innate factors, in children’s gender identity. In fact, the reassignment was seriously problematic and Reimer’s boyishness was never far beneath the surface. When he was aged 14, Reimer was told the truth about his past and set about reversing the gender reassignment process to become male again. He later campaigned against other children with genital injuries being gender reassigned in the way that he had been. His story was turned into the book As Nature Made Him: The Boy Who Was Raised as a Girl by John Colapinto, and he is the subject of two BBC Horizon documentaries. Tragically, Reimer took his own life in 2004, aged just 38.
Scientists discover what our human ancestors were making inside the Wonderwerk Cave in South Africa 1.8 million years ago.
- Researchers find evidence of early tool-making and fire use inside the Wonderwerk Cave in Africa.
- The scientists date the human activity in the cave to 1.8 million years ago.
- The evidence is the earliest yet found and advances our understanding of human evolution.
Some of the earliest known human activity has been identified in a cave in South Africa. A team of geologists and archaeologists found evidence that our ancestors were making fire and tools in the Wonderwerk Cave in the country's Kalahari Desert some 1.8 million years ago.
A new study published in the journal Quaternary Science Reviews from researchers at the Hebrew University of Jerusalem and the University of Toronto proposes that Wonderwerk — which means "miracle" in Afrikaans — contains the oldest evidence yet discovered of human activity inside a cave.
"We can now say with confidence that our human ancestors were making simple Oldowan stone tools inside the Wonderwerk Cave 1.8 million years ago," shared the study's lead author Professor Ron Shaar from Hebrew University.
Oldowan stone tools are the earliest known type of tool, dating as far back as 2.6 million years ago. An Oldowan tool, which was useful for chopping, was made by chipping flakes off one stone by hitting it with another stone.
An Oldowan stone tool. Credit: Wikimedia / Public domain
Professor Shaar explained that Wonderwerk is different from other ancient sites where tool shards have been found because it is a cave and not in the open air, where sample origins are harder to pinpoint and contamination is possible.
Studying the cave, the researchers were able to pinpoint the moment, over one million years ago, when a shift from Oldowan tools to the earliest handaxes occurred. Investigating deeper layers of the cave, the scientists also established that the deliberate use of fire there dates back one million years.
This is significant because examples of early fire use usually come from sites in the open air, where there is the possibility that they resulted from wildfires. The remnants of ancient fires in a cave — including burned bones, ash, and tools — contain clear clues as to their purpose.
To precisely date their discovery, the researchers relied on paleomagnetism and burial dating to measure magnetic signals from the remains hidden within a sedimentary rock layer 2.5 meters thick. Prehistoric clay particles that settled on the cave floor retain a magnetization that records the direction of Earth's ancient magnetic field. Knowing the dates of magnetic field reversals allowed the scientists to narrow down the date range of the cave layers.
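To make that dating logic concrete, here is a toy sketch of how a layer's measured polarity can be checked against the published reversal timescale. The chron boundary dates below are rounded literature values, and the function is an invented illustration, not the study's actual method.

```python
# Matching a layer's measured magnetic polarity against the known
# geomagnetic reversal timescale (rounded, approximate boundary dates).

# (chron name, older bound in Ma, younger bound in Ma, polarity)
POLARITY_TIMESCALE = [
    ("Brunhes", 0.78, 0.0, "normal"),
    ("Jaramillo", 1.07, 0.99, "normal"),
    ("Matuyama (part)", 1.78, 1.07, "reversed"),
    ("Olduvai", 1.95, 1.78, "normal"),
]

def candidate_intervals(observed_polarity, older_than_ma=0.0):
    """Return the timescale intervals consistent with a layer's magnetization."""
    return [
        (name, older, younger)
        for name, older, younger, polarity in POLARITY_TIMESCALE
        if polarity == observed_polarity and older >= older_than_ma
    ]

# A normally magnetized layer that stratigraphy says is older than ~1.5 Ma
# is consistent only with the Olduvai subchron, i.e. roughly 1.95-1.78 Ma.
print(candidate_intervals("normal", older_than_ma=1.5))
```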
The Wonderwerk Cave in the Kalahari Desert. Credit: Michael Chazan / Hebrew University of Jerusalem
Professor Ari Matmon of Hebrew University used another dating method to solidify their conclusions, focusing on isotopes within quartz particles in the sand that "have a built-in geological clock that starts ticking when they enter a cave." He elaborated that in their lab, the scientists were "able to measure the concentrations of specific isotopes in those particles and deduce how much time had passed since those grains of sand entered the cave."
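As a rough illustration of that "built-in geological clock": one standard version of this technique compares aluminum-26 and beryllium-10 in quartz, whose ratio decays at a known net rate once the grains are shielded from cosmic rays underground. Whether or not those are the exact isotopes used here, the sketch below shows how such a clock works, using rounded textbook constants and an invented measurement rather than the study's actual data.

```python
import math

# Cosmogenic-nuclide burial dating, in miniature. All values are rounded
# textbook-style constants; the measured ratio is invented for illustration.

T_HALF_AL26 = 0.717e6   # half-life of aluminum-26, in years (approx.)
T_HALF_BE10 = 1.387e6   # half-life of beryllium-10, in years (approx.)
SURFACE_RATIO = 6.75    # typical 26Al/10Be production ratio at the surface

LAMBDA_AL = math.log(2) / T_HALF_AL26  # decay constants, per year
LAMBDA_BE = math.log(2) / T_HALF_BE10

def burial_age_years(measured_ratio):
    """Years since quartz grains were shielded from cosmic rays.

    Once sand washes into a dark cave, new production stops and the
    26Al/10Be ratio decays from SURFACE_RATIO at a known net rate,
    so the measured ratio reveals the burial time.
    """
    return math.log(SURFACE_RATIO / measured_ratio) / (LAMBDA_AL - LAMBDA_BE)

# A hypothetical grain measured at a ratio of ~2.9 implies burial roughly
# 1.8 million years ago under these assumptions.
print(f"{burial_age_years(2.9):,.0f} years")
```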
Finding the exact dates of human activity in the Wonderwerk Cave could lead to a better understanding of human evolution in Africa as well as the way of life of our early ancestors.
If you ask your maps app to find "restaurants that aren't McDonald's," you won't like the result.
- The Chinese Room thought experiment is designed to show how understanding something cannot be reduced to an "input-process-output" model.
- Artificial intelligence today is becoming increasingly sophisticated thanks to learning algorithms but still fails to demonstrate true understanding.
- We all rely on computational, rule-following habits when we first learn a new skill, until this somehow becomes understanding.
It's your first day at work, and a new colleague, Kendall, catches you over coffee.
"You watch the game last night?" she says. You're desperate to make friends, but you hate football.
"Sure, I can't believe that result," you say, vaguely, and it works. She nods happily and talks at you for a while. Every day after that, you live a lie. You listen to a football podcast on the weekend and then regurgitate whatever it is you hear. You have no idea what you're saying, but it seems to impress Kendall. You somehow manage to come across as an expert, and soon she won't stop talking football with you.
The question is: do you actually know about football, or are you imitating knowledge? And what's the difference? Welcome to philosopher John Searle's "Chinese Room."
The Chinese Room
Searle's argument was designed as a critique of what's called a "functionalist" view of mind. This is the philosophy that argues that our mind can be explained fully by what role it plays, or in other words, what it does or what "function" it has.
One form of functionalism sees the human mind as following an "input-process-output" model. We have the input of our senses, the process of our brains, and a behavioral output. Searle thought this was at best an oversimplification, and his Chinese Room thought experiment goes to show how human minds are not simply biological computers. It goes like this:
Imagine a room, and inside is John, who can't speak a word of Chinese. Outside the room, a Chinese person sends a message into the room in Chinese. Luckily, John has an "if-then" book for Chinese characters. For instance, if he gets <你好吗>, the proper reply is <我还好>. All John has to do is follow his instruction book.
The Chinese speaker outside of the room thinks they're talking to someone inside who knows Chinese. But in reality, it's just John with his fancy book.
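In programming terms, the room is nothing but a lookup table. Here is a minimal sketch of that idea; the first rule comes from the example above, while the second entry and the fallback reply are hypothetical additions for illustration.

```python
# Searle's room as a lookup table: each incoming message maps to a
# canned reply, with no understanding anywhere in the process.

RULE_BOOK = {
    "你好吗": "我还好",          # from the text: "How are you?" -> "I'm fine"
    "你叫什么名字": "我叫约翰",   # hypothetical: "What's your name?" -> "I'm John"
}

def john_in_the_room(message):
    # John matches symbols he doesn't understand and copies out the reply.
    return RULE_BOOK.get(message, "对不起")  # hypothetical fallback: "sorry"

print(john_in_the_room("你好吗"))  # prints 我还好, with zero understanding
```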
What is understanding?
Does John understand Chinese? The Chinese Room embodies a purely computational view of the mind, and yet something seems to be missing. Truly understanding something is not an "if-then" automated response. John is missing that sinking-in feeling, the absorption, the bit of understanding that's so hard to express. Understanding a language doesn't work like this. Humans are not Google Translate.
And yet, this is how AIs are programmed. A computer system is programmed to provide a certain output based on a finite list of certain inputs. If I double click the mouse, I open a file. If you type a letter, your monitor displays tiny black squiggles. If we press the right buttons in order, we win at Mario Kart. Input — Process — Output.
But AIs don't know what they're doing, and Google Translate doesn't really understand what it's saying, does it? They're just following a programmer's orders. If I say, "Will it rain tomorrow?" Siri can look up the weather. But if I ask, "Will water fall from the clouds tomorrow?" it'll be stumped. A human would not be (although they might look at you oddly).
A fun way to test just how little an AI understands us is to ask your maps app to find "restaurants that aren't McDonald's." Unsurprisingly, you won't get what you want.
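Why does this fail? One deliberately simplified way to see it: if the app scores candidate results by naive keyword overlap, the word "McDonald's" boosts exactly the results the user wanted excluded, while "aren't" carries no weight at all. The toy search function below is invented for illustration; real map services are far more sophisticated, but negation remains a classic stumbling block.

```python
# A deliberately naive ranker: results are scored by how many words they
# share with the query, so negation words contribute nothing.

RESTAURANTS = ["McDonald's", "Luigi's Trattoria", "Pho Real", "Burger King"]

def naive_search(query):
    words = set(query.lower().split())
    # "aren't" is just another unmatched token; "mcdonald's" scores a hit.
    return max(RESTAURANTS,
               key=lambda name: len(words & set(name.lower().split())))

print(naive_search("restaurants that aren't McDonald's"))  # McDonald's
```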
The Future of AI
To be fair, the field of artificial intelligence is just getting started. Yes, it's easy right now to trick our voice assistant apps, and search engines can be frustratingly unhelpful at times. But that doesn't mean AI will always be like that. It might be that the problem is only one of complexity and sophistication, rather than anything else. It might be that the "if-then" rule book just needs work. Things like "the McDonald's test" or AI's inability to respond to original questions reveal only a limitation in programming. Given that language and the list of possible questions are finite, it's quite possible that AI will be able to (at the very least) perfectly mimic a human response in the not too distant future.
What's more, AIs today have increasingly advanced learning capabilities. Algorithms are no longer simply input-process-output but rather allow systems to search for information and adapt anew to what they receive.
A notorious example of this occurred when a Microsoft chat bot started spouting bigotry and racism after "learning" from what it read on Twitter. (Although, this might just say more about Twitter than AI.) Or, more sinister perhaps, two Facebook chat bots were shut down after it was discovered that they were not only talking to each other but were doing so in an invented language. Did they understand what they were doing? Who's to say that, with enough learning and enough practice, an AI "Chinese Room" might not reach understanding?
Can imitation become understanding?
We've all been a "Chinese Room" at times — be it talking about sports at work, cramming for an exam, using a word we didn't entirely know the meaning of, or calculating math problems. We can all mimic understanding, but it also begs the question: can imitation become so fluid or competent that it is understanding.
The old adage "fake it, 'till you make it" has been proven true over and over. If you repeat an action enough times, it becomes easy and habitual. For instance, when you practice a language, musical instrument, or a math calculation, then after a while, it becomes second nature. Our brain changes with repetition.
So, it might just be that we all start off as Chinese Rooms when we learn something new, but this still leaves us with a pertinent question: when, how, and at what point does John actually understand Chinese? More importantly, will Siri or Alexa ever understand you?
With the rise of Big Data, methods used to study the movement of stars or atoms can now reveal the movement of people. This could have important implications for cities.
- A treasure trove of mobility data from devices like smartphones has allowed the field of "city science" to blossom.
- I was recently part of a team that compared mobility patterns in Brazilian and American cities.
- We found that, in many cities, low-income and high-income residents rarely travel to the same geographic locations. Such segregation has major implications for urban design.
Almost 55 percent of the world's seven billion people live in cities. And unless the COVID-19 pandemic puts a serious — and I do mean serious — dent in long-term trends, the urban fraction will climb almost to 70 percent by midcentury. Given that our project of civilization is staring down a climate crisis, the massive population shift to urban areas is something that could really use some "sciencing."
Is urbanization going to make things worse? Will it make things better? Will it lead to more human thriving or more grinding poverty and inequality? These questions need answers, and a science of cities, if there were such a thing, could provide them.
Good news. There already is one!
The science of cities
With the rise of Big Data (for better or worse), scientists from a range of disciplines are getting an unprecedented view into the beating heart of cities and their dynamics. Of course, really smart people have been studying cities scientifically for a long time. But Big Data methods have accelerated what's possible to warp speed. As "exhibit A" for the rise of a new era of city science, let me introduce you to the field of "human mobility" and a new study just published by a team I was on.
Human mobility is a field that's been amped up by all those location-enabled devices we carry around and the large-scale datasets of our activities, such as credit card purchases, taxi rides, and mobile phone usage. These days, all of us are leaving digital breadcrumbs of our everyday activities, particularly our movements around towns and cities. Using anonymized versions of these datasets (no names please), scientists can look for patterns in how large collections of people engage in daily travel and how these movements correlate with key social factors like income, health, and education.
There have been many studies like this in the recent past. For example, researchers looking at mobility patterns in Louisville, Kentucky found that low-income residents tended to travel further on average than affluent ones. Another study found that mobility patterns across different socioeconomic classes exhibit very similar characteristics in Boston and Singapore. And an analysis of mobility in Bogota, Colombia found that the most mobile population was neither the poorest nor the wealthiest citizens but the upper-middle class.
These were all excellent studies, but it was hard to make general conclusions from them. They seemed to point in different directions. The team I was part of wanted to get a broader, comparative view of human mobility and income. Through a partnership with Google, we were able to compare data from two countries — Brazil and the United States — of relatively equal populations but at different points on the "development spectrum." By comparing mobility patterns both within and between the two countries, we hoped to gain a better understanding of how people at different income levels moved around each day.
Mobility in Brazil vs. United States
Socioeconomic mobility "heatmaps" for selected cities in the U.S. and Brazil. The colors represent destination based on income level. Red depicts destinations traveled by low-income residents, while blue depicts destinations traveled by high-income residents. Overlapping areas are colored purple.Credit: Hugo Barbosa et al., Scientific Reports, 2021.
The results were remarkable. In a figure from our paper (shown above), it's clear that we found two distinct kinds of relationship between income and mobility in cities.
The first was a relatively sharp distinction between where people in lower and higher income brackets traveled each day. For example, in my hometown of Rochester, New York, or in Detroit, the places visited by the two income groups (e.g., job sites, shopping centers, doctors' offices) were relatively partitioned. In other words, people from low-income and high-income neighborhoods were not mixing very much, meaning they weren't spending time in the same geographical locations. In addition, lower income groups traveled to the city center more often, while upper income groups traveled around the outer suburbs.
The second kind of relationship was exemplified by cities like Boston and Atlanta, which didn't show this kind of partitioning. There was a much higher degree of mixing in terms of travel each day, indicating that income was less of a factor for determining where people lived or traveled.
In Brazil, however, all the cities showed the kind of income-based segregation seen in U.S. cities like Rochester and Detroit. There was a clear separation of regions visited with practically no overlap. And unlike the U.S., visits by the wealthy were strongly concentrated in the city centers, while the poor largely traversed the periphery.
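To give a concrete sense of what "overlap" and "partitioning" mean in practice, here is a toy sketch of one simple way to quantify them from visit records. The data are invented, and the measures in our paper are considerably more sophisticated; this only illustrates the idea of partitioned versus mixed mobility.

```python
# A toy version of the overlap idea behind the heatmaps above.
# Visit records are invented for illustration.

from collections import defaultdict

# (income bracket, visited location) pairs from anonymized traces
visits = [
    ("low", "downtown"), ("low", "eastside_mall"), ("low", "downtown"),
    ("high", "suburb_plaza"), ("high", "golf_club"), ("high", "downtown"),
]

places = defaultdict(set)
for bracket, place in visits:
    places[bracket].add(place)

# Jaccard overlap: 1.0 means both groups visit identical places;
# values near 0.0 mean the partitioned pattern seen in cities like Detroit.
low, high = places["low"], places["high"]
overlap = len(low & high) / len(low | high)
print(f"mobility overlap: {overlap:.2f}")  # 0.25 here
```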
Data-driven urban design
Our results have straightforward implications for city design. As we wrote in the paper, "To the extent that it is undesirable to have cities with residents whose ability to navigate and access resources is dependent on their socioeconomic status, public policy measures to mitigate this phenomenon are the need of the hour." That means we need better housing and public transportation policies.
But while our study shows there are clear links between income disparity and mobility patterns, it also shows something else important. As an astrophysicist who spent decades applying quantitative methods to stars and planets, I am amazed at how deep we can now dive into understanding cities using similar methods. We have truly entered a new era in the study of cities and all human systems. Hopefully, we'll use this new power for good.