Big Think Interview With Bjarne Stroustrup

Question: What inspired you to create C++?

Bjarne Stroustrup: In the really old days, people had to write their code directly to work on the hardware.  They wrote load and store instructions to get stuff in and out of memory and they played about with bits and bytes and stuff.  You could do pretty good work with that, but it was very specialized.  Then they figured out that you could build languages fit for humans for specific areas.  Like they built FORTRAN for engineers and scientists and they built COBOL for businessmen.

And then in the mid-'60s, a bunch of Norwegians, mostly Ole-Johan Dahl and Kristen Nygaard, thought: why can't you have a language that is fit for humans for all domains, not just linear algebra and business?  And they built something called SIMULA.  And that's where they introduced the class as the thing you have in the program to represent a concept in your application world. So if you are a mathematician, a matrix will become a class; if you are a businessman, a personnel record might become a class; in telecommunications a dial buffer might become a class—you can represent just about anything as a class.  And they went a little bit further and represented relationships between classes; any hierarchical relationship could be done as a bunch of classes.  So you could say that a fire engine is a kind of a truck, which is a kind of a car, which is a kind of a vehicle, and organize things like that.  This became known as object-oriented programming, or also, in some variants of it, as data abstraction.
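
To make the idea concrete, here is a minimal C++ sketch (not from the interview) of the kind of hierarchy Stroustrup describes in words; the class names Vehicle, Car, Truck, and FireEngine are hypothetical illustrations, not code he presented.

```cpp
#include <iostream>

// Hypothetical hierarchy mirroring the example in the answer above:
// a fire engine is a kind of truck, which is a kind of car, which is a kind of vehicle.
class Vehicle {
public:
    virtual ~Vehicle() {}
    virtual void describe() const { std::cout << "a vehicle\n"; }
};

class Car : public Vehicle {
public:
    void describe() const override { std::cout << "a car, a kind of vehicle\n"; }
};

class Truck : public Car {
public:
    void describe() const override { std::cout << "a truck, a kind of car\n"; }
};

class FireEngine : public Truck {
public:
    void describe() const override { std::cout << "a fire engine, a kind of truck\n"; }
};

int main() {
    FireEngine fe;
    const Vehicle& v = fe;  // a FireEngine can be used wherever a Vehicle is expected
    v.describe();           // the virtual call picks the most derived description
}
```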

And my idea was very simple: to take the ideas from SIMULA for general abstraction, for the benefit of humans representing things, and combine them with the low-level stuff, for which the best language at that time was C, which was done at Bell Labs by Dennis Ritchie.  Take those two ideas and bring them together so that you could do high-level abstraction, but efficiently enough and close enough to the hardware for really demanding computing tasks.  And that is where I came in.  And so C++ has classes like SIMULA, but they run as fast as C code, so the combination becomes very useful.

Question: What makes C++ such a widely used language?

Bjarne Stroustrup: If I have to characterize C++'s strength, it comes from the ability to have abstractions and to have them be so efficient that you can afford them in infrastructure.  And you can access hardware directly, as you often have to do with operating systems, with real-time control, with little things like cell phones, and so the combination is something that is good for infrastructure in general.

Another aspect that's necessary for infrastructure is stability.  When you build infrastructure, it could be sort of the lowest level of IBM mainframes talking to the hardware on behalf of the higher levels of software, which is a place they use C++.  Or a fuel injector for a large marine diesel engine, or a browser; it has to be stable for a decade or so because you can't afford to fiddle with the stuff all the time.  You can't afford to rewrite it; I mean, taking one of those ships into harbor costs a lot of money.  And so you need a language that's not just good at what it's doing; you have to be able to rely on it being available for decades on a variety of different hardware, and to be used by programmers over a decade or two at least.  C++ is now about three decades old.  And if that's not the case, you have to rewrite your code all the time.  And that happens primarily with experimental languages and with proprietary commercial languages that change to meet fads.

C++'s problem is partly the complexity, because we haven't been able to clean it up.  There's still code written in the '80s that is running, and people don't like their running code to break.  It could cost them millions or more.

Question: What is the difference between C and C++?

Bjarne Stroustrup:  C has the basic mechanisms for expressing computations.  It has iteration, it has data types, it has functions, and that's it.  It doesn't get into the game of expressing abstractions.  So if I want a matrix in C, I would have to say, I want an array, and then I want a whole bunch of arrays, and when I want to get the third element I have to program my way down to the third element of the fourth row, or something like that.

In C++ you can define something, call it a matrix, and define a subscript operator for it. If you don't want rectangular matrices, you can have pentadiagonal matrices, triangular matrices; that's the kind of stuff that the experts in that field are interested in.  And you build that set of concepts and then you program with it directly.  It's easier to program, it's easier to debug, and sometimes it's even easier to optimize for performance when you are expressing the notions at the higher level, at the level where an expert in the field operates, rather than trying to have the expert in the field, say the physicist, also be an expert in dealing with the hardware, with the computer.  There are fields still where you have to have both a physicist and a computer scientist to get the work done, but we would like to minimize those because the skill sets are not the same.  So you want to lift from the hardware towards the human level.
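
As a hedged illustration (not part of the interview), the sketch below contrasts the two styles: raw C-style index arithmetic versus a small C++ class with a subscript operator. The Matrix class, its operator(), and the element type double are hypothetical choices; real matrix libraries differ.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical Matrix abstraction: the class owns the storage and does the
// index arithmetic, so users work at the level of "element (i, j) of a matrix".
class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols, 0.0) {}

    double& operator()(std::size_t i, std::size_t j) { return data_[i * cols_ + j]; }
    double  operator()(std::size_t i, std::size_t j) const { return data_[i * cols_ + j]; }

    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }

private:
    std::size_t rows_, cols_;
    std::vector<double> data_;
};

int main() {
    // C style: the programmer does the index bookkeeping by hand.
    double raw[4][5] = {};
    raw[3][2] = 1.0;   // "program my way down to the third element of the fourth row"

    // C++ style: the abstraction hides the bookkeeping.
    Matrix m(4, 5);
    m(3, 2) = 1.0;
    return 0;
}
```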

Question: Is C obsolete?

Bjarne Stroustrup: This is somewhat controversial.  I think it is obsolete.  I think the languages should have been merged into one, so that C would have been a subset of C++ instead of nearly a subset of C++. And then people could have used whatever parts of the C++ tool set they needed.  As it is now, there are still enough incompatibilities that you have to remember which language you’re writing in, and I don’t think that is necessary.  It appears to be a historical necessity, but it is not a technical necessity.  

I’ve argued for compatibility, very strong compatibility, all the time.  I mean, I started working on C++ three doors down from Dennis Ritchie and we were talking every day.  The competition and tension that has been between C and C++ over the decades certainly didn’t come from home.  

Dennis Ritchie wrote that first C book with Brian Kernighan; in fact, I'll have dinner with Brian next week.  We're still very good friends, as we've always been, but sometimes the programmers of the languages don't quite see it that way.  It should have been one language.
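
One concrete example of the incompatibilities mentioned above (my illustration, not Stroustrup's): C allows the implicit conversion from the void* returned by malloc, while C++ does not, so the same source compiles as C but is rejected by a C++ compiler. A minimal sketch:

```c
/* Valid C, but not valid C++: C++ forbids the implicit conversion
   from void* (the return type of malloc) to int*. */
#include <stdlib.h>

int main(void) {
    int *p = malloc(10 * sizeof *p);  /* OK in C; a C++ compiler requires a cast */
    free(p);
    return 0;
}
```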

Question: What is the future of programming?

Bjarne Stroustrup: There'll be a unified language, but I'm not talking about a programming language.  I'm talking more about a unified design style, a unified set of guidelines for how to combine the techniques.  I certainly hope that there will not be just one programming language.  I don't think that's at all likely, and I would be sad if it happened, because we would have lost a lot when we don't have this tension between the languages that allows us to make progress.  I mean, the Middle Ages may have been very comfortable, but I don't think I would have wanted to live there.  I like the diversity of ideas, and the ideas rubbing up against each other.  That's how we make progress.

Question: Are you a proponent of open source software?

Bjarne Stroustrup: I am generally in favor of open source software with very few, if any, restrictions.  So I like the BSD Licenses.  I am not anti-commercial.  I would not put something into my license that would be a virus against commercial use.  

On the other hand, I don't think all software can or should be open, because there's a lot of sort of boring stuff that requires a high level of expertise to deal with.  I mentioned sort of the firmware layers and hardware and such; there are very few people who really understand it.  You don't get it maintained by a couple of volunteers, because you need maybe five or ten years of experience in a particular field to do anything constructive.  And there's lots and lots of software that's not glamorous, that's not interesting, where you simply won't get the strength of the open source movement, where you have lots of people and lots of contributions, from both individuals and organizations.  There's a lot of software where people just aren't interested.  And for that you need something else to keep it going, and that's usually the dollars that people get for doing the hard, sometimes boring, and sometimes advanced stuff.  So I think we will always have some open source software and some closed.

I guess I should add that C++ is used for both anyway.  I don't have a horse in that race; I have both.

Question: What are the five most important languages that programmers should know?

Bjarne Stroustrup: First of all, nobody should call themselves a professional if they only know one language.  And five is a good number of languages to know reasonably well.  And then you'll know a bunch more, just because you're interested, because you've read about them, because you've written a couple of little programs like [...].  But five isn't a bad number.  Somewhere between three and seven.

Let's see, well, my list is going to be sort of uninteresting because it's going to be the list of languages that are best known and most useful, I'm afraid.  Let's see: C++, of course; Java; maybe Python for mainline work... And if you know those, you can't help knowing a little bit about Ruby and JavaScript, you can't help knowing C, because that's what fills out the domain, and of course C#.  But again, these languages create a cluster, so that if you knew any five of the ones that I said, you would actually know the others.  I haven't cheated with the numbers.  I rounded out a design space.

It would be nice, beyond that, to know something quite weird outside it, just to have the experience; pick one of the functional languages, for instance.  That's good to keep your head spinning a bit when it needs to.  I don't have any favorites in that field; there are enough of them.  And, I don't know, if you're interested in high-performance numerical computation, you have to look at one of the languages there, but for most people that's just esoteric.

Question: What are the most interesting trends in technology?

Bjarne Stroustrup: Many things are interesting these days.  The interesting thing for me is the computers they have inside them.  And so when you see things, cars driving down there, planes flying and such, you can see them as a distributed computing system with wings, or a distributed computing system with wheels.

I was over in Germany earlier in the year to speak at the German automotive software conference.  And I don't know much about programming cars, but I got an invitation to go down and see how they program the BMWs, which is in C++, so that'll be interesting.  Not that the others weren't, but those are cool cars.  And I've worked with some people at Lockheed Martin where they build the F-35s, the new fighter planes, which are C++ also.  So I get some insight into how things are used.

And so at the bottom of all of this is the technology of the hardware; there's the technology of the communications stuff between it, networking; and on the hardware side, what has happened a lot is the multi-cores.  You get concurrent programming both from the physical distribution and from what's on the chips themselves.  And this is interesting to me because my PhD topic was distribution and concurrency and such.  So I've been looking at that.  So that's interesting.

And a lot of the most interesting applications these days fall into that category. Take our cell phones: the last time I looked, there were several processors.  Take an SLR camera: it's got five or six processors in it, and there's code in the lenses; I mean, that's some interesting code there.  And whether you think of that as technology or gadgets, I think of them as gadgets.  I mean, a cell phone or a new jetliner, they're gadgets.  They are things you program; there are programs in them, there are techniques, lots of computers.

What I haven't talked much about, and what I don't think that much about, is sort of the web kind of thing and the web business.  From my perspective, that's somebody else's business, except when the scale becomes really huge.  So you have things like the Google search engine, which uses C++, and then I get interested and they get interested.  Facebook has recently turned to C++ because they needed the performance.  I guess one way of saying it is, here's my contribution to dealing with global warming: if you can double the efficiency of those systems, you don't need yet another server farm that uses as much energy as a small town.

So my view is that there's software and there’s computers in just about everything and if you look at interesting things, well, you find it.

Question: What is your work setup like?

Bjarne Stroustrup: I travel with a little laptop, the smallest real computer I can get.  So a 12-and-something-inch screen, but a decent processor speed.  And where I am, I plug it into a dock and I use two screens and such, and then I network to any other resources I want.  If at all possible, I would like to make that machine smaller, or at least lighter.  Larger and lighter would be nice, but I can't get that.  And if you're stuck in a sardine-class seat on a plane, you should still be able to open it up and write; you can't do that with one of those bodybuilder's editions.  So a smaller, convenient machine that you can carry with you and plug into a bigger system, networked to more resources.

My laptop is a Windows machine.  People always ask about that, and they can't understand why it's not Linux.  Well, my Linux box happens to sit on my desk, and I talk to a traditional Unix through it.  So I use both on a daily basis.  It just happens that it's easier to carry the Windows box around.

Question: Do you prefer to work at night or during the day?

Bjarne Stroustrup: Real thinking, real work goes on fairly early in the day. And then in the evening, no, not really sort of thought work, not creative work.  I can polish stuff.  I’m not a night bird like that.  I like to think when I’m fresh.  

Question: Do you listen to music while writing code?

Bjarne Stroustrup:  Quite often, yes.  I have a mixture of stuff on the computer; I just plug in the earphones and listen.  And it's a mixture: there's classical, there's a bit of rock, there's a bit of country.  It's quite surprising what I can actually work with and what I can't, because it really does affect it.  There's music that sort of takes over, and you think about the music rather than the code.  That's no good.  And then there's music that you don't hear... that doesn't help either.  And, well, I found something that works, probably just for me, but I like some music.

Question: What advice do you have for C++ developers?

Bjarne Stroustrup: Most people don't use C++ anywhere near as well as it could be used.  There are still a lot of people who are trying to use it as a glorified C, or as a slightly mutated Java or Smalltalk, and that's not the right way of using it.  Go back, read one good book and see if you are up to date, or if you happen to be stuck in the '80s or '90s.  We can do much better.  And then, next year, C++0x will arrive, the next generation of C++, and it'll support some of the modern programming styles that have proven useful over the last decade or so, significantly better than C++98, which was the previous standard.  And so learn a little bit about it, look at what has been done, and try to understand why it was done.  Things are just about to get much better.
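
As an illustration (not part of the interview), here is a minimal sketch of the style shift he is pointing to, using C++0x/C++11 features such as brace initialization, auto, range-based for, lambdas, and std::unique_ptr; the specific example is hypothetical.

```cpp
#include <algorithm>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

int main() {
    // Standard containers and brace initialization instead of raw arrays.
    std::vector<std::string> names = {"SIMULA", "C", "C++"};

    // A lambda as the comparison instead of a separately written function.
    std::sort(names.begin(), names.end(),
              [](const std::string& a, const std::string& b) { return a.size() < b.size(); });

    // Range-based for: no manual index bookkeeping.
    for (const auto& n : names)
        std::cout << n << '\n';

    // Ownership expressed in the type; no explicit delete needed.
    std::unique_ptr<int> p(new int(42));
    std::cout << *p << '\n';
}
```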

Recorded August 12, 2010

Interviewed by Max Miller

A conversation with the creator of C++.


Yale scientists restore brain function to 32 clinically dead pigs

Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.

  • Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
  • They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
  • The research raises many ethical questions and puts to the test our current understanding of death.

The image of an undead brain coming back to life again is the stuff of science fiction. Not just any science fiction, but specifically B-grade sci-fi. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?

But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week's Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, what we once thought of as dead.

What's dead may never die, it seems

The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the functions normally provided to the organ by the body. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse; their brains were completely removed from their skulls.

BrainEx pumped an experimental solution into the brain that essentially mimics blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.

The researchers managed to keep some brains alive for up to 36 hours, and currently do not know whether BrainEx could have sustained the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and the lead researcher.

As a control, other brains received either a fake solution or no solution at all. None showed revived brain activity, and all deteriorated as normal.

The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues of such studies would be brain disorders and diseases. This could point the way to developing new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.

"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, the bioethicists at the Duke University School of Law who wrote the study's commentary, told National Geographic.

An ethical gray matter

Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains did not approach neural activity anywhere near the level of consciousness.

The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic should they have seen signs of consciousness.

Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.

Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?

"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."

One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.

The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.

"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.

It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.

Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a "cellularly active" brain? The distress of a partially alive brain?

The dilemma is unprecedented.

Setting new boundaries

Another science fiction story that comes to mind when discussing this story is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."

She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.

Why compassion fades

A scientific look into a ubiquitous phenomenon.


One victim can break our hearts. Remember the image of the young Syrian boy discovered dead on a beach in Turkey in 2015? Donations to relief agencies soared after that image went viral. However, we feel less compassion as the number of victims grows. Are we incapable of feeling compassion for large groups of people who suffer a tragedy, such as an earthquake or the recent Sri Lanka Easter bombings? Of course not, but the truth is we aren't as compassionate as we'd like to believe, because of a paradox of large numbers. Why is this?

Compassion is a product of our sociality as primates. In his book, The Expanding Circle: Ethics, Evolution, and Moral Progress, Peter Singer states, "Human beings are social animals. We were social before we were human." Mr. Singer goes on to say, "We can be sure that we restrained our behavior toward our fellows before we were rational human beings. Social life requires some degree of restraint. A social grouping cannot stay together if its members make frequent and unrestrained attacks on one another."

Attacks on ingroups can come from forces of nature as well. In this light, compassion is a form of expressed empathy to demonstrate camaraderie.

Yet even after hundreds of centuries of evolution, when tragedy strikes beyond our community, our compassion wanes as the number of displaced, injured, and dead mounts.

The drop-off in commiseration has been termed the collapse of compassion. The term has also been defined in The Oxford Handbook of Compassion Science: ". . . people tend to feel and act less compassionately for multiple suffering victims than for a single suffering victim."

That the drop-off happens has been widely documented, but at what point this phenomenon happens remains unclear. One paper, written by Paul Slovic and Daniel Västfjäll, sets out a simple formula, ". . . where the emotion or affective feeling is greatest at N =1 but begins to fade at N = 2 and collapses at some higher value of N that becomes simply 'a statistic.'"

The ambiguity of "some higher value" is curious. That value may relate to Dunbar's Number, a theory developed by British anthropologist, Robin Dunbar. His research centers on communal groups of primates that evolved to support and care for larger and larger groups as their brains (our brains) expanded in capacity. Dunbar's is the number of people with whom we can maintain a stable relationship — approximately 150.

Some back story

Professor Robin Dunbar of the University of Oxford has published considerable research on anthropology and evolutionary psychology. His work is informed by anthropology, sociology and psychology. Dunbar's Number is a cognitive boundary, one we are likely incapable of breaching. The number is based on two notions: that brain size in primates correlates with the size of the social groups they live among, and that these groups in human primates are relative to communal numbers set deep in our evolutionary past.

In simpler terms, 150 is about the maximum number of people with whom we can identify, interact, care about, and work to protect. Dunbar's Number falls along a logarithmic continuum, beginning with the smallest, most emotionally connected group of five, then expanding outward in multiples of three: 5, 15, 50, 150. The numbers in these concentric circles are affected by multiple variables, including the closeness and size of immediate and extended families, along with the greater cognitive capacity of some individuals to maintain stable relationships with larger than normal group sizes. In other words, folks with more cerebral candlepower can engage with larger groups. Those with lesser cognitive powers, smaller groups.

The number that triggers "compassion collapse" might be different for individuals, but I think it may begin to unravel along the continuum of Dunbar's relatable 150. We can commiserate with 5 to 15 to 150 people because upon those numbers, we can overlay names and faces of people we know: our families, friends and coworkers, the members of our clan. In addition, from an evolutionary perspective, that number is important. We needed to care if bands of our clan were being harmed by raids, disaster, or disease, because our survival depended on the group staying intact. Our brains developed the capacity to care for the entirety of the group but not beyond it. Beyond our ingroup was an outgroup that may have competed with us for food and safety and it served us no practical purpose to feel sad that something awful had happened to them, only to learn the lessons so as to apply them for our own survival, e.g., don't swim with hippos.

Lapses

Imagine losing 10 family members in a house fire. Now instead, lose 10 neighbors, 10 from a nearby town, 10 from Belgium, 10 from Vietnam 10 years ago. One could almost feel the emotion ebbing as the sentence drew to a close.

There are two other important factors which contribute to the softening of our compassion: proximity and time. While enjoying lunch in Santa Fe, we can discuss the death toll in the French Revolution with no emotional response, but might be nauseated to discuss three children lost in a recent car crash around the corner. Conflict journalists attempt to bridge these geotemporal lapses but have long struggled to ignite compassion in their home audiences for far-flung tragedies. Being a witness to carnage is an immense stressor, but the impact diminishes across the airwaves as the kilometers pile up.

A Dunbar Correlation

Where is the inflection point at which people become statistics? Can we find that number? In what way might that inflection point be influenced by the Dunbar 150?

"Yes, the Dunbar number seems relevant here," said Gad Saad, PhD., the evolutionary behavioral scientist from the John Molson School of Business at Concordia University, Montreal, in an email correspondence. Saad also recommended Singer's work.

I also went to the wellspring. I asked Professor Dunbar by email if he thought 150 was a reasonable inflection point for moving from compassion into statistics. He graciously responded; his reply is lightly edited for space.

Professor Dunbar's response:

"The short answer is that I have no idea, but what you suggest is perfect sense. . . . One-hundred and fifty is the inflection point between the individuals we can empathize with because we have personal relationships with them and those with whom we don't have personalized relationships. There is, however, also another inflection point at 1,500 (the typical size of tribes in hunter-gatherer societies) which defines the limit set by the number of faces we can put names to. After 1,500, they are all completely anonymous."

I asked Dunbar if he knows of or suspects a neurophysiological aspect to the point where we simply lose the capacity to manage our compassion:

"These limits are underpinned by the size of key bits of the brain (mainly the frontal lobes, but not wholly). There are a number of studies showing this, both across primate species and within humans."

In his literature, Professor Dunbar presents two reasons why his number stands at 150, despite the ubiquity of social networking: the first is time — investing our time in a relationship is limited by the number of hours we have available to us in a given week. The second is our brain capacity measured in primates by our brain volume.

Friendship, kinship and limitations

"We devote around 40 percent of our available social time to our 5 most intimate friends and relations," Dunbar has written, "(the subset of individuals on whom we rely the most) and the remaining 60 percent in progressively decreasing amounts to the other 145."

These brain functions are costly, in terms of time, energy and emotion. Dunbar states, "There is extensive evidence, for example, to suggest that network size has significant effects on health and well-being, including morbidity and mortality, recovery from illness, cognitive function, and even willingness to adopt healthy lifestyles." This suggests that we devote so much energy to our own network that caring about a larger number may be too demanding.

"These differences in functionality may well reflect the role of mentalizing competencies. The optimal group size for a task may depend on the extent to which the group members have to be able to empathize with the beliefs and intentions of other members so as to coordinate closely…" This neocortical-to-community model carries over to compassion for others, whether in or out of our social network. Time constrains all human activity, including time to feel.

As Dunbar writes in The Anatomy of Friendship, "Friendship is the single most important factor influencing our health, well-being, and happiness. Creating and maintaining friendships is, however, extremely costly, in terms of both the time that has to be invested and the cognitive mechanisms that underpin them. Nonetheless, personal social networks exhibit many constancies, notably in their size and their hierarchical structuring." Our mental capacity may be the primary reason we feel less empathy and compassion for larger groups; we simply don't have the cerebral apparatus to manage their plights. "Part of friendship is the act of mentalizing, or mentally envisioning the landscape of another's mind. Cognitively, this process is extraordinarily taxing, and as such, intimate conversations seem to be capped at about four people before they break down and form smaller conversational groups. If the conversation involves speculating about an absent person's mental state (e.g., gossiping), then the cap is three — which is also a number that Shakespeare's plays respect."

We cannot mentalize what is going on in the minds of people in our groups much beyond our inner circle, so it stands to reason we cannot do it for large groups separated from us by geotemporal lapses.

Emotional regulation

In a paper, C. Daryl Cameron and Keith B. Payne state, "Some researchers have suggested that [compassion collapse] happens because emotions are not triggered by aggregates. We provide evidence for an alternative account. People expect the needs of large groups to be potentially overwhelming, and, as a result, they engage in emotion regulation to prevent themselves from experiencing overwhelming levels of emotion. Because groups are more likely than individuals to elicit emotion regulation, people feel less for groups than for individuals."

This argument seems to imply that we have more control over diminishing compassion than not. To say "people expect the needs of large groups to be potentially overwhelming" suggests we consciously consider what that caring could entail and back away from it, or that we become aware that we are reaching an endpoint of compassion and begin to purposely shift the framing of the incident from one that is personal to one that is statistical. The authors offer an alternative hypothesis to the notion that emotions are not triggered by aggregates, attempting to show that we regulate our emotional response as the number of victims comes to be perceived as overwhelming. However, in the real world, large death tolls are not brought to us one victim at a time. We are told about a devastating event, and then we react viscerally.

If we don't begin to express our emotions consciously, then the process must be subconscious, and that number could have evolved to the point where it is now innate.

Gray matter matters

One of Dunbar's most salient points is that brain capacity influences social networks. In his paper, The Social Brain, he writes: "Path analysis suggests that there is a specific causal relationship in which the volume of a key prefrontal cortex subregion (or subregions) determines an individual's mentalizing skills, and these skills in turn determine the size of his or her social network."

It's not only the size of the brain; in fact, mentalizing recruits different regions for ingroup empathy. The Stanford Center for Compassion and Altruism Research and Education published a study of the brain regions activated when showing empathy for strangers, in which the authors stated, "Interestingly, in brain imaging studies of mentalizing, participants recruit more dorsal portions of the medial prefrontal cortex (dMPFC; BA 8/9) when mentalizing about strangers, whereas they recruit more ventral regions of the medial prefrontal cortex (BA 10), similar to the MPFC activation reported in the current study, when mentalizing about close others with whom participants experience self-other overlap."⁷

It's possible the region of the brain that activates to help an ingroup member evolved for good reason, survival of the group. Other regions may have begun to expand as those smaller tribal groups expanded into larger societies.

Rabbit holes

There is an eclectic list of reasons why compassion may collapse, irrespective of sheer numbers:

(1) Manner: How the news is presented affects viewer framing. In her book, European Foreign Conflict Reporting: A Comparative Analysis of Public News, Emma Heywood explores how tragedies and war are offered to the viewers, which can elicit greater or lesser compassionate responses. "Techniques, which could raise compassion amongst the viewers, and which prevail on News at Ten, are disregarded, allowing the victims to remain unfamiliar and dissociated from the viewer. This approach does not encourage viewers to engage with the sufferers, rather releases them from any responsibility to participate emotionally. Instead compassion values are sidelined and potential opportunities to dwell on victim coverage are replaced by images of fighting and violence."

(2) Ethnicity: How relatable are the victims? Although it can be argued that people in western countries would feel a lesser degree of compassion for victims of a bombing in Karachi, that doesn't mean people in countries near Pakistan wouldn't feel compassion for the Karachi victims at a level comparable to what westerners might feel about a bombing in Toronto. Distance plays as much of a role in this dynamic as the sound evolutionary data demonstrating a need for us to both recognize and empathize with people who look like members of our communal entity. It's not racism; it's tribalism. We simply did not evolve in massive heterogeneous cultures. As evolving humans, we're still working it all out. It's a survival mechanism that developed over millennia, one we now struggle with as we fine-tune our trust of others.

In the end

Think of compassion collapse on a grid, with compassion represented in the Y axis and the number of victims running along the X. As the number of victims increases beyond one, our level of compassion is expected to rise. Setting aside other variables that may raise compassion (proximity, familiarity etc.), the level continues to rise until, for some reason, it begins to fall precipitously.

Is it because we've become aware of being overwhelmed or because we have reached max-capacity neuron load? Dunbar's Number seems a reasonable place to look for a tipping point.

Professor Dunbar has referred to the limits of friendship as a "budgeting problem." We simply don't have the time to manage a bigger group of friends. Our compassion for the plight of strangers may drop off at a number equivalent to the number of people with whom we can be friends, a number to which we unconsciously relate. Whether or not we solve this intellectual question, it remains a curious fact that the larger a tragedy is, the more likely human faces are to become faceless numbers.