Reality is far stranger than fiction.
- Black holes are stranger than fiction, especially when we explore the weird effects of watching someone or something fall into one.
- Rotating black holes may be traversable if the physics as we understand it holds.
- To discuss the physics, we explore a fictional tale with a grand ending.
What happens when someone falls into a black hole? If you are the unfortunate soul being gobbled up, things don't look too bad until they turn really bad. Unless there is an outlet through a wormhole, and you are really lucky.
The fictional story below — an abridged version of one published in my 2002 book The Prophet and the Astronomer — explains why. Since we now know that black holes exist and that even Jeff Bezos can fly into outer space, it is only a matter of time before humans fly into black holes — albeit a very, very long time from now: the nearest black hole to Earth (as of now) lies a "mere" 1,500 light-years away.
But first, a refresher. In his general theory of relativity, Albert Einstein equated gravity with the curvature of space around a massive body. The effect is quite negligible for light masses but becomes important for massive stars and even more so for very compact massive objects such as neutron stars, whose surface gravity is billions of times stronger than the sun's. Distortions of space caused by a large mass (a star) will cause small moving masses (planets) to deviate from what Newtonian gravity predicts. Another remarkable consequence of Einstein's theory of gravity is the slowing down of clocks in strong gravitational fields: strong gravity bends space and slows down time.
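That clock-slowing effect can be made concrete with the Schwarzschild time-dilation factor. Here is a minimal Python sketch, using standard constants and illustrative stellar parameters of my own choosing (not from the story), comparing a clock on the sun's surface with one on a typical neutron star:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # mass of the sun, kg

def time_dilation_factor(mass_kg, radius_m):
    """Rate of a clock sitting at radius r relative to a faraway clock,
    from the Schwarzschild solution: sqrt(1 - 2GM / (r c^2))."""
    return math.sqrt(1.0 - 2.0 * G * mass_kg / (radius_m * C**2))

# A clock on the sun's surface (radius ~696,000 km) barely lags:
sun = time_dilation_factor(M_SUN, 6.96e8)

# A clock on a typical neutron star (~1.4 solar masses packed into
# a ~10 km radius) runs noticeably slow:
neutron_star = time_dilation_factor(1.4 * M_SUN, 1.0e4)

print(f"sun's surface:          {sun:.8f}")          # ~0.9999979
print(f"neutron star's surface: {neutron_star:.3f}")  # ~0.766
```

A second on the neutron star's surface lasts almost a quarter longer as seen from far away, while the sun's gravity shaves off only a couple of parts per million.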
Now, on with the story.
In my young days, I traveled from planet to planet looking for old spaceship parts. It was in one of my travels in search of a rare gyroscope for a 2180 Mars Lander that I found "Mr. Ström's Rocket Parts," an enormous hangar littered with mountains of space garbage. While I was consulting the store's virtual stock-scanning device to search for the gyroscope, Mr. Ström himself came to greet me. He was famous throughout the galaxy for claiming to have come closer than anyone to a black hole, a story that, to most, was just that — a story.
Like many before me, I asked Mr. Ström to tell me his story. After hesitating a while, he gave in.
"I was commander of a fleet built to explore the complex astrophysical X-ray source known as Cygnus X-1," he started. "Since the 1970s, over three millennia ago, this was suspected to be a binary star system 6,000 light-years from Earth. The two members of the binary system, thought to be a blue giant star of about 20-30 solar masses and a black hole of about 7-15 solar masses, orbited so close together that the black hole frantically sucked matter from its huge companion into a spiraling oblivion. This mad swirling heated the in-falling stellar matter to enormous temperatures, producing the X-rays astronomers on Earth observed. Even though the data indicated that the smaller object of the pair had a mass much larger than the maximum mass for neutron stars, it was still not clear if it was a black hole. Since other attempts to identify it had failed, the League of Planets decided that the only way to know for sure was to go there.
"The fleet consisted of three vessels, each under the command of a Ström, a great honor to my family. I led the vessel named CX1, my middle brother led CX2, and the youngest led CX3. I will spare you the details of how the mission was prepared, and how, after many problems with our hyper-relativistic plasma drive, we finally arrived to within one light-month of our destination. Through our telescopes we could see an enormous hot blue star being drained by an invisible hole in space.
"We were instructed to fly single file toward the black hole, keeping a very large distance from each other; my youngest brother first, my middle brother second, and me last. We knew that, from a large distance, a black hole behaves like any other massive object, as the differences general relativity predicted happen only fairly close to it. We also knew that every black hole has an imaginary limiting sphere around it known as the 'event horizon,' which marks the boundary inside which not even light can escape.
"My youngest brother's ship, the CX3, was to approach the hole, sending us periodic light flashes with a given frequency; we were to follow at a distance, measuring the frequency of the radiation emitted by my brother's ship as well as the time interval between the pulses, and then compare them with the theoretical predictions for gravitational redshift and time delay. The three vessels plunged to a distance of 10,000 kilometers from the hole; while CX1 and CX2 hovered at that distance, my brother closed in to 100 kilometers from the hole. He was instructed to send us infrared radiation, but we detected only radio waves. The gravitational redshift formula was indeed correct. Furthermore, the intervals between two pulses increased quite perceptibly; time was flowing slower for my brother, as viewed from our distant ships. He plunged to the dangerously close distance of ten kilometers from the hole, only seven from the event horizon; this was the closest distance the ship could stand, due to the enormous tidal forces around the hole, which stretch everything into spaghetti. (Numbers assume a one-solar-mass black hole.)
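(Stepping outside the story for a moment: the gravitational redshift formula Ström's crew was testing is simple enough to sketch. Below is a hedged Python example; the constants are standard values, and the distances are the story's own, assuming a one-solar-mass, non-rotating hole.)

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # mass of the sun, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating hole: r_s = 2GM / c^2."""
    return 2.0 * G * mass_kg / C**2

def redshift_factor(mass_kg, r_m):
    """Received/emitted frequency ratio for light sent by a hovering
    emitter at radius r to a distant observer: sqrt(1 - r_s / r)."""
    return math.sqrt(1.0 - schwarzschild_radius(mass_kg) / r_m)

rs = schwarzschild_radius(M_SUN)
print(f"event horizon of a one-solar-mass hole: {rs / 1000:.2f} km")  # ~2.95 km

for r_km in (10_000, 100, 10):
    f = redshift_factor(M_SUN, r_km * 1000.0)
    print(f"hovering at {r_km:>6} km: frequency ratio {f:.4f}, "
          f"pulse intervals stretched by {1.0 / f:.4f}x")
```

The ~3 km horizon is why the story's ten-kilometer orbit sits "only seven from the event horizon"; the frequency ratio falls toward zero as the emitter approaches the horizon itself.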
"From that close orbit, my brother was to send pulses of visible light, but all we detected were (invisible) radio waves; we could not see my brother's ship any longer, and I started to feel very uneasy. The theory was correct: a ship falling into a black hole will become invisible to a more distant ship (us) due to the redshifting of light. That also meant that we would never be able to see a star collapsing into a black hole, as it would become invisible before it met its end. A related effect was the slowing of time. As my youngest brother approached the black hole, the radiation pulses were arriving at increasingly long intervals. Thus, not only could we not see him, but we would also have to wait an enormous amount of time to receive any message from him. This confirmed the prediction that for a distant observer, the collapse of a star would take forever. Of course, for the unlucky traveler who free-falls into the black hole, nothing unusual would happen to the passage of time, as explained by the equivalence principle: gravity is neutralized in free fall. Unfortunately, his body would be horribly stretched.
"The turbulence and steady bombardment of matter swirling around the black hole caused my brother's spaceship to drift uncontrollably into the maelstrom. I had to try to rescue him. After all, this was a rotating black hole, and the theory predicted that instead of a crushing singularity at its center, there should be a wormhole connected to another point in the universe. A desperate maneuver to be sure.
"My middle brother waited in a safe distant orbit around the black hole. As I plunged in, the whirling of space dragged me in like water into a drain. The combination of enormous gravitational pull and furious bombardment of radiation and particles took a toll on my ship; but its fuselage miraculously — what else could it be but a miracle? — survived, as I did, thanks to the once controversial anti-crunch shield. Outside, space seemed to convulse into infinitely many coexisting shapes. Inside a black hole, I realized, reality had no boundaries.
"I felt an enormous push, as if the spaceship was being coughed up by a giant. I must have remained unconscious for quite a while. When I looked into a mirror, I could hardly believe what I saw; my hair had turned completely white, and my face was covered with wrinkles I didn't have moments (moments?) ago. I checked my location in the computer and realized that, somehow, I had re-emerged 2,000 light-years away from Cygnus X-1. The only possible explanation was that I had traveled through a wormhole, which somehow was kept open inside the black hole, and had been tossed out by a white hole at a faraway point in space."
Apart from the sequence of facts inside the black hole — where we know very little — the rest is what we should expect from watching someone fall into a black hole. Reality, for these cosmic maelstroms, is definitely stranger than fiction.
One single plot of data embodies the most profound thing we know about the stars.
- Just like people, stars are born, grow old, and die.
- Astrophysicists figured this out by studying stars' brightness and temperatures.
- This data is beautifully and powerfully captured in the Hertzsprung-Russell (HR) diagram.
Stars are just like us! I don't mean that in a "Dua Lipa likes to wear pajamas when she shops for milk" kind of way. What I'm talking about are life cycles.
Stars are born, live, and die. Just like us. That's a pretty amazing fact in and of itself when you consider that for most of human history, folks thought stars were eternal and unchanging. Instead, stars change over the course of time, just like we do.
Last week, we took a first look at the Hertzsprung-Russell diagram (HR diagram), which is how astronomers discovered that stars have life cycles. I called it "the most important graph in astrophysics." It's so important that it deserves another look today. So, let's take a deeper dive to see how it reveals the patterns of stellar biography.
Explaining the HR diagram
An HR diagram is a plot of stellar luminosity (energy output) on the vertical axis and stellar surface temperature on the horizontal axis. The major focus of the last post was the Main Sequence, which is the dense diagonal band that appears when you take a mess of stars and drop them onto this kind of plot.
Why was the appearance of the Main Sequence so important? An HR diagram is really a snapshot of a big collection of stars taken at random points in their lives. Say we go out one night and point our telescope at 100,000 stars and measure their luminosity ("L") and their temperature ("T"). Based on those measured values of L and T, we drop each star onto their appropriate location in the diagram.
This is a lot like going to the mall and measuring the height (H) and weight (W) of random people you run into and then plotting the results on a Height vs. Weight plot. What do you think you would see if you collected H and W for 1,000 random human beings? The majority of your points would show humans with heights between 5 and 6 feet tall and weights between 100 and 250 pounds. Why? Because that's the range of height and weight for middle-aged adults — and we all spend most of our lives in middle age (say, between 25 and 65).
But there are exceptions. You would also expect to see a cluster of really small heights and weights for babies and little kids. In addition, you would expect some medium heights and lower weights representing old people. But most people would fall on a band in your plot of H and W between (5 feet, 100 pounds) and (6 feet, 250 pounds).
Main Sequence: A star's middle age
So, what then is the Main Sequence? It's the place where the stars "live" on the HR diagram in their middle age. Boom! So simple and yet so profound. Stars change. Their properties change. They have life cycles, and that means that the place we expect to find most of them (in terms of their changing properties on the HR diagram) is where they spend most of their lives — that is, their middle ages.
What defines a star's long middle age? It's the period when they are burning hydrogen gas as a fuel for fusion. Stars support themselves against the gravitational crush of their own weight via thermonuclear fusion in their cores. Fusion occurs when light elements get squeezed into heavier elements, releasing a little energy in the process (via E = mc²). Since hydrogen is the most abundant and lightest element in the universe, it's the first gas that gets fused in a star's core. As long as stars have hydrogen to burn, you will find them on the Main Sequence.
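As a back-of-the-envelope illustration of E = mc² at work: fusing hydrogen into helium converts roughly 0.7% of the fuel's mass into energy (a standard textbook figure, not from this article). A short Python sketch:

```python
C = 2.998e8          # speed of light, m/s

# Fusing four hydrogen nuclei into one helium-4 nucleus converts roughly
# 0.7% of the hydrogen's mass into energy (standard textbook value).
MASS_TO_ENERGY_FRACTION = 0.007

def fusion_energy_joules(hydrogen_kg):
    """Energy released by fusing a given mass of hydrogen, via E = mc^2."""
    return MASS_TO_ENERGY_FRACTION * hydrogen_kg * C**2

energy = fusion_energy_joules(1.0)   # one kilogram of hydrogen fuel
print(f"{energy:.2e} J")             # ~6.3e14 J
```

That is roughly a week's output from a large power plant, from a single kilogram of fuel, which is why stars can shine for billions of years.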
Only after the hydrogen fuel for fusion runs out does a star face a kind of late-life crisis in which it must change its interior conditions to get the next element, helium, to start fusing. But once that happens, the star "moves" off the Main Sequence.
Another question is, "Why is the Main Sequence a diagonal band running from high L and T to low L and T?" The answer lies in the physics of nuclear fusion. High mass stars have a high gravitational crush in their centers, which raises their core temperatures. Nuclear fusion rates are crazy sensitive to temperature. That means massive stars burn their hydrogen hot and fast, producing huge energy outputs. So, the Main Sequence is also a sequence in stellar mass. The high-mass stars are up in the high L and T corner, while the low-mass stars are in the low L and T corner.
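The mass-luminosity link described above can be sketched with the rough textbook scaling L ∝ M^3.5 (an average exponent; the real relation varies along the sequence). It also yields the lifetime pattern: fuel scales with mass, burn rate with luminosity, so massive stars die young.

```python
def luminosity(mass):
    """Rough textbook main-sequence scaling, in solar units: L ~ M^3.5."""
    return mass ** 3.5

def lifetime_gyr(mass):
    """Main-sequence lifetime scales as fuel over burn rate (M / L),
    normalized so a one-solar-mass star lasts ~10 billion years."""
    return 10.0 * mass / luminosity(mass)

for m in (0.5, 1.0, 10.0):
    print(f"{m:>4} M_sun: L = {luminosity(m):8.1f} L_sun, "
          f"lifetime ~ {lifetime_gyr(m):.3g} Gyr")
```

On these rough numbers, a 10-solar-mass star blazes at thousands of solar luminosities but exhausts its hydrogen in a few tens of millions of years, while a half-solar-mass star outlasts the current age of the universe.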
The rest of the HR diagram
What about those other collections of stars on the HR diagram? What are the "giants" and the "dwarfs" telling us about the life cycles of stars? We'll have to pick up that tale next time.
If you truly want to understand modern astrophysics, knowing how to read this graph is essential.
- The invention of spectroscopy and photography converted astronomy into astrophysics.
- With these new tools, astrophysicists gathered untold amounts of data on stars.
- When these stars were plotted on a graph, amazing patterns emerged.
Like people, stars are born, live, and then die. But how do scientists know that stars are born and die? Where did that knowledge come from? After all, for most of human history, many people thought that stars were eternal and unchanging. What was it that set astronomers on the path to seeing stars as something bound by time and change? The answer comes in the form of a simple and beautiful diagram first made 100 or so years ago.
Astronomy becomes astrophysics
By the end of the 19th century, new tools were being added to telescopes that turned astronomy into astrophysics. The most important of these was the spectrograph, which let astronomers see how much energy a star emitted at different wavelengths (or colors). It's also what allowed astrophysicists to conclude definitively that the sun is a star.
Photography also revolutionized the field by providing a permanent record of observations so that they could be compared and correlated with other photographed observations. Using the spectrograph and photographic plates, astrophysicists began to amass a huge storehouse of data on stars.
At observatories in Europe and the U.S., the spectra of hundreds of thousands of stars were taken. Later these spectra were sorted into different classification "bins" based on patterns found in the way that stars emitted their energy at different wavelengths. (It's worth noting that this sorting work was both challenging and exhausting and, in many cases, was done by bright young women who were not allowed to be formal astronomy students.) After the work was done, the classification bins for the spectra eventually were recognized to be associated with the star's surface temperature.
Photographic data also allowed the stars to be sorted in another way, in this case, based on their brightness, which was a measure of the total energy they radiated into space.
What all this means is that by the first years of the 20th century, astronomers had something new and tremendously valuable: a big, hard-won treasure trove of stellar data giving each star's temperature and brightness. Now the question was what to do with it.
The Hertzsprung-Russell diagram
The simple answer to this kind of question in science was the same then as it is now: make a plot and see what happens.
Each of about 100,000 stars was placed on a two-dimensional graph. The temperature was on the horizontal axis, and the brightness was on the vertical axis. That's basically what Danish astronomer Ejnar Hertzsprung and American astronomer Henry Norris Russell each did, independently of each other, to create what is now called the Hertzsprung-Russell (HR) diagram.
So, what does "interesting" in this kind of plot mean? Well, I can tell you what would not be interesting. If stars just appeared randomly on the plot — as if someone had taken a shotgun to it — that would not be interesting. It would mean that there was no correlation between brightness and temperature.
Thankfully, a shotgun pattern is definitely not what astronomers saw in the HR diagram. Instead, most of the stars collected on a thick diagonal line stretching from one corner of the plot to the other. Astronomers called this line the Main Sequence. There were also other places, outside the Main Sequence, where the stars collected. What astronomers were seeing in their data was the unmistakable indication of a hidden order.
The patterns in the HR diagram told astrophysicists that something was going on inside stars. The Main Sequence, for example, told astrophysicists that a strong link must exist between the energy stars pumped into space and how hot their surfaces got. That link implied that there was hidden physics tying stellar energy output and stellar surface temperature together in a powerful chain of cause and effect. If they could understand that chain, they could answer the 2500-year-old holy grail of astronomy questions — what makes stars shine?
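One link in that chain of cause and effect turned out to be simple blackbody physics: a star's energy output depends on its surface area and the fourth power of its surface temperature (the Stefan-Boltzmann law). Here's a quick Python sanity check using modern values; this is my illustrative addition, not part of the historical account.

```python
import math

SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26     # solar luminosity, W
R_SUN = 6.957e8      # solar radius, m

def luminosity_watts(radius_m, temp_k):
    """Blackbody output of a star: L = 4 * pi * R^2 * sigma * T^4."""
    return 4.0 * math.pi * radius_m**2 * SIGMA * temp_k**4

# Sanity check against the sun, whose surface temperature is ~5772 K:
ratio = luminosity_watts(R_SUN, 5772.0) / L_SUN
print(f"predicted / observed solar luminosity: {ratio:.3f}")   # ~1.0
```

The T⁴ dependence is part of why the HR diagram's axes are so tightly correlated: small changes in surface temperature mean large changes in energy output.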
It would take another 50 years after the first HR diagrams appeared before astrophysicists could really see how the Main Sequence and other patterns were a direct consequence of stellar physics in the form of stellar aging over time. For that, they would need the invention of nuclear physics and a theory of thermonuclear fusion. We'll take up that story in another post.
For today, it's enough to marvel at how the simple act of throwing a bunch of stars onto a plot unveiled a hidden pattern that could not have been seen otherwise. That pattern was a clue, a hint of which direction to face, spurring scientists forward eventually to unlocking the mystery of the stars.
We live in a world dominated by science, but most people don't understand its most essential characteristic: establishing standards of evidence to keep us from getting fooled by our own biases and opinions.
- Maintaining standards of evidence is the most important and least appreciated idea in science.
- Modern science was established in the late Renaissance when networks of researchers began working out best practices for linking evidence with conclusions.
- In the face of science denial and attempts to create a post-truth society, we have to protect the primacy of standards of evidence in science and society.
I talk a lot about science to people who are not scientists. It's generally a lot of fun because most folks are science-curious even if they don't think about it a lot on their own time. But whether I'm talking about alien life, black holes, or the weirdnesses of quantum mechanics, there is always one really important idea that I try to get across that generally no one is interested in:
Standards of evidence. It's the most important boring idea in the universe.
Networks of scientists led to scientific societies
The development of modern science was a long, slow process that required input from most of the world's cultures ranging from ancient Greece and medieval Islam to India and China and eventually Renaissance Europe.
One of the most critical elements in Europe was the gradual build-up of international communities of scholars. While we usually think of science as being driven forward through the inspiration of one singular genius after another, that's only part of the story. For every Galileo and Newton there were hundreds of people you never heard of. They formed a network of thinkers and tinkerers writing letters to each other and making visits across the continent. In this way, they exchanged notes on things like the best way to carry out an experiment on boiling liquids or a new way to consider the mathematics of problems in celestial mechanics.
Unless you are a scientist, you probably have very little idea of how science knows what it knows, or even more important, how it knows what it doesn't know.
While they might not have known it at the time, what these scholars were also doing was setting up the foundations for an international order of scientific knowledge that would rest upon mutually agreed standards of evidence.
Eventually these networks became formalized. Scientific academies started popping up in places like Italy, where the Academy of the Mysteries of Nature was founded in Naples in 1560. Later the Royal Society in England, formally known as The Royal Society of London for Improving Natural Knowledge, was established in 1660. The French Academy of Sciences was formed just six years later. Over the years, these institutions and others would lead the way in establishing "best practices" for how to carry out scientific research and how to make sure that the conclusions a scientist drew from that research were supported by the evidence.
Scientific societies led to standards of evidence
I'm telling you this not because I think the history is so cool (though it is). Instead, what matters is seeing how the idea of standards of evidence was born in its scientific form. It came from people arguing in public over what should count as public facts or better yet public knowledge. Science didn't drop out of the sky fully formed. It was, and is, the fruit of a very human, very collective effort. The goal of that effort was to determine the best way to ask nature questions and ensure that you're getting correct answers.
This was not, by the way, a smooth process. There were lots of wrong turns in figuring out what counted as meaningful evidence and what was just another way of getting fooled. But over time, people figured out that there were standards for how to set up an experiment, how to collect data from it, and how to interpret that data. These standards now include things like isolating the experimental apparatus from spurious environmental effects, understanding how data collection devices respond to inputs, and accounting for systematic errors in analyzing the data. There are, of course, many more.
In this way, scientists figured out which standards were useful in linking evidence to conclusions.
Why standards matter
Science is now the most powerful force shaping human life. Without it, there could never be seven billion of us living on the planet at the same time. It has shaped and reshaped how we eat, how we travel, how we deal with sickness, how we communicate, and how we go to war. It is also how we are pushing Earth into new and dangerous (for us) climate states. But despite all this ubiquity and power, unless you are a scientist, you probably have very little idea of how science knows what it knows, or even more important, how it knows what it doesn't know.
Most of us don't understand what it means to have standards of evidence or how these standards get applied. That means that we can't see how the same methods that gave us our cell phones also gave us our understanding of climate change. When a pandemic hits, we can't see how the science is going to be an evolving process as those standards of evidence get used to sort through the firehose of real-time data. And when it comes to things like UFOs or "Ancient Aliens," we won't see that holding fast to those standards is the only thing that can keep us from being fooled by a conclusion that we may want to be true as opposed to accepting the one that actually is true.
Admittedly, standards of evidence is not the most thrilling topic in the world. But it very well may be the most important.
Reduction is an approach that has been successful in science but is not itself synonymous with "science."
- Reductionism — the philosophical position that all phenomena can be explained by interactions between particles — is not inherently a part of the scientific method.
- For example, most biological processes cannot be explained by appealing to quarks.
- Those who study complex phenomena, such as condensed matter physicists, often reject reductionism and embrace its alternative, known as emergence.
Fundamentally, science is a path to understanding the world. It's a way to enter a dialogue with nature. Using the methods of science, certain kinds of questions — meaning questions that are posed in a particular kind of way — can get answered. Science is so successful at this question-answering task, however, that other ideas often get attached to it in a philosophical game of pin-the-tail-on-the-donkey. It's in this often unconscious association that ideas that are not fundamentally part of the method we call science get tagged as "what science says."
Reductionism vs. emergence
One of these ride-along ideas is reductionism. Reductionism is a philosophical stance that claims that any explanation about the universe must reduce to the fundamental entities of physics, things like quarks and electrons.
Not long ago, I wrote an article about why reductionism is not what science "says" about the world. I introduced reductionism's philosophical alternative, known as emergence, and I promised to write more and continue unpacking the tension between these views. Today, as promised, we will dig a bit deeper into this ancient and critical question.
My post sparked some lovely conversations. Some folks agreed with what I was saying; others most certainly did not. That was pretty awesome from my point of view because conversations among people who disagree are the only way each side can learn more about their own points of view (and maybe have their minds changed). Based on that discussion, astronomer Jason Wright penned a cogent post on his perspective on reductionism. Later, Wright's post led to a really lovely piece by philosophers Thomas Metcalf and Chelsea Harami that laid out the reductionism vs. emergence debate. Those articles are worth reading.
Here's a summary of the debate: Emergence argues that, sometimes, when the fundamental entities of physics combine, they create fundamentally new kinds of behaviors and structures. Emergence argues that nature invents new things at higher levels of structure (hence, my claim that you are more than your atoms).
Philosophers then go on to distinguish between weak and strong emergence. Weak emergence sees all causes still being tracked back to the atoms, while strong emergence wants to claim that something truly new emerges at the higher levels. Also, much of this debate happens within a philosophical framework called "physicalism," which claims that everything that exists is, well, physical.
Conscious experience, and to a lesser degree life, are often identified as Ur-examples of strong emergence. Conscious experience is so weird that you can see why it's easy to tag it as an emergent phenomenon. But what about emergence — either strong or weak — in plain old physics?
Emergence in condensed matter physics
Perhaps unsurprisingly, some philosophers argue "yes," and others argue "no." For those with a physics background, I highly recommend the book Why More Is Different: Philosophical Issues in Condensed Matter Physics and Complex Systems for some good articles on the subject.
One of the most interesting things about the emergence-vs-reductionism debate is who takes which side. It is most definitely worth noting that some of the most emphatic voices arguing for stronger versions of emergence come from condensed matter physicists. This is the field that studies solid matter (and liquids too). In fact, the whole debate got started in 1972 with a paper by Nobel Prize-winning physicist Philip Anderson called "More Is Different," in which he wrote:
"The reductionist hypothesis does not by any means imply a 'constructionist' one: The ability to reduce everything to simple fundamental laws does not imply the possibility to start from those laws and reconstruct the universe. (...) At each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other."
Later Robert Laughlin, also a condensed matter physicist, wrote a book called A Different Universe, in which he argued that attempts to apply the fundamental equations of quantum mechanics to any system with more than 100 particles leaves you with something that can only be solved with God's computer (i.e., it can't really be solved). Based on this, he argued that you really can't derive the higher levels of structure from the lower levels and that there do exist higher order, emergent principles that are required to understand the world.
Another Nobel Prize-winning condensed matter physicist, Anthony Leggett, has also weighed in on this question, writing:
"No significant advance in the theory of matter in bulk has ever come about through derivation from microscopic principles. (...) I would confidently argue further that it is in principle and forever impossible to carry out such a derivation. (...) The so-called derivations of the results of solid-state physics from microscopic principles alone are almost all bogus, if 'derivation' is meant to have anything like its usual sense."
Leggett goes further:
"I claim then that the important advances in macroscopic physics come essentially in the construction of models at an intermediate or macroscopic level, and that these are logically (and psychologically) independent of microscopic physics."
Reductionism doesn't work
What is interesting to me is that the people who actually do the work of studying the higher levels of structure are often the ones most convinced that reductionism doesn't really work. Now, physicists are not philosophers, which means they are not trained to see the ontological and epistemological meaning of the theories they create. But I do think it's telling that those closest to complexity have the deepest intuitions about, and commitments to, emergence.
Climate change and artificial intelligence pose substantial — and possibly existential — problems for humanity to solve. Can we?
- Just by living our day-to-day lives, we are walking into a disaster.
- Can humanity wake up to avert disaster?
- Perhaps COVID was the wake-up call we all needed.
Does humanity have a chance for a better future, or are we just unable to stop ourselves from driving off a cliff? This was the question that came to me as I participated in a conference entitled The Future of Humanity hosted by Marcelo's Institute for Cross-Disciplinary Engagement. The conference hosted an array of remarkable speakers, some of whom were hopeful about our chances and some less so. But when it came to the dangers facing our project of civilization, two themes appeared in almost everyone's talks.
And here's the key aspect that unifies those dangers: we are doing it to ourselves.
The problem of climate change
The first existential crisis that was discussed was, as you might guess, climate change. Bill McKibben, the journalist and now committed activist who first began documenting the climate crisis as far back as the 1980s, gave us a history of humanity's inability to marshal action even in the face of mounting scientific evidence. He spoke of the massive, well-funded disinformation efforts paid for by the fossil fuel industry to keep that action from being taken because it would hurt their bottom lines.
It's not like some alien threat has arrived and will use a mega-laser to drive the Earth's climate into a new and dangerous state. Nope, it's just us — flying around, using plastic bottles, and keeping our houses toasty in the winter.
Next Elizabeth Kolbert, one of America's finest non-fiction writers, gave a sobering portrait of the state of efforts that attempt to deal with climate change through technological fixes. Based on her wonderful new book, she looked at the problem of control when it comes to people and the environment. She spoke of how often we get into trouble when we try to exert control over things like rivers or animal populations only to find that these efforts go awry due to unintended consequences. This requires new layers of control which, in turn, follow the same path.
At the end of the talk, she focused on attempts to deal with climate change through new kinds of environmental controls, with the subtext being that we are likely to run into the same cycle of unintended consequences and attempts to repair the damage. In a question-and-answer period following her talk, Kolbert was decidedly not positive about the future. Because she had looked so deeply into the possibilities of using technology to get us out of the climate crisis, she was dubious that a tech fix was going to save us. The only real action that will matter, she said, is masses of people in the developed world reducing their consumption. She didn't see that happening anytime soon.
The problem of artificial intelligence
Another concern was over artificial intelligence. Here the concern was not so much existential. By this, I mean the speakers were not fearful that some computer was going to wake up into consciousness and decide that the human race needed to be enslaved. Instead, the danger was more subtle but no less potent.

Susan Halpern, also one of our greatest non-fiction writers, gave an insightful talk that focused on the artificial aspect of artificial intelligence. Walking us through numerous examples of how "brittle" the machine learning algorithms at the heart of modern AI systems are, Halpern was able to pinpoint how these systems are not intelligent at all but carry all the biases of their makers (often unconscious ones). For example, facial recognition algorithms can have a hard time differentiating the faces of women of color, most likely because the "training data sets" used to teach the algorithms were not representative of these human beings. But because these machines supposedly rely on data and "data don't lie," these systems get deployed into everything from decisions about criminal justice to decisions about who gets insurance. And these are decisions that can have profound effects on people's lives.
Then there was the general trend of AI being deployed in the service of both surveillance capitalism and the surveillance state. In the former, your behavior is always being watched and used against you in terms of swaying your purchasing decisions; in the latter, you are always being watched by those in power. Yikes!
The banality of danger
In listening to these talks, I was struck by how mundane the sources of these dangers are in day-to-day life. Unlike nuclear war or some lone terrorist building a super-virus (threats that Sir Martin Rees eloquently spoke of), when it comes to the climate crisis and an emerging surveillance culture, we are collectively doing it to ourselves through our own innocent individual actions. It's not like some alien threat has arrived and will use a mega-laser to drive the Earth's climate into a new and dangerous state. Nope, it's just us — flying around, using plastic bottles, and keeping our houses toasty in the winter. And it's not like soldiers in black body armor arrive at our doors and force us to install a listening device that tracks our activities. Nope, we willingly set them up on the kitchen counter because they are so dang convenient. These threats to our existence or to our freedoms are things that we are doing just by living our lives in the cultural systems we were born into. And it would take considerable effort to untangle ourselves from these systems.
So, what's next then? Are we simply doomed because we can't collectively figure out how to build and live with something different? I don't know. It's possible that we are doomed. But I did find hope in the talk given by the great (and my favorite) science fiction writer Kim Stanley Robinson. He pointed to how different eras have different "structures of feeling," which is the cognitive and emotional background of an age. Robinson looked at some positive changes that emerged in the wake of the COVID pandemic, including a renewed sense that most of us recognize that we're all in this together. Perhaps, he said, the structure of feeling in our own age is about to change.
Let us hope, and where we can, let us act.