- The history of AI shows boom periods (AI summers) followed by busts (AI winters).
- The cyclical nature of AI funding is due to hype and promises not fulfilling expectations.
- This time, we might enter something resembling an AI autumn rather than an AI winter, but fundamental questions remain about whether true AI is even possible.
The dream of building a machine that can think like a human stretches back to the origins of electronic computers. But ever since research into artificial intelligence (AI) began in earnest after World War II, the field has gone through a series of boom and bust cycles called "AI summers" and "AI winters."
Each cycle begins with optimistic claims that a fully, generally intelligent machine is just a decade or so away. Funding pours in and progress seems swift. Then, a decade or so later, progress stalls and funding dries up. Over the last ten years, we've clearly been in an AI summer as vast improvements in computing power and new techniques like deep learning have led to remarkable advances. But now, as we enter the third decade of the 21st century, some who follow AI feel the cold winds at their back, leading them to ask, "Is winter coming?" If so, what went wrong this time?
A brief history of AI
To see if the winds of winter are really coming for AI, it is useful to look at the field's history. The first real summer can be pegged to 1956 and the famous workshop at Dartmouth College where one of the field's pioneers, John McCarthy, coined the term "artificial intelligence." The workshop was attended by scientists like Marvin Minsky and Herbert A. Simon, whose names would go on to become synonymous with the field. For those researchers, the task ahead was clear: capture the processes of human reasoning through the manipulation of symbolic systems (i.e., computer programs).
Unless we are talking about very specific tasks, any 6-year-old is infinitely more flexible and general in his or her intelligence than the "smartest" Amazon robot.
Throughout the 1960s, progress seemed to come swiftly as researchers developed computer systems that could play chess, deduce mathematical theorems, and even engage in simple discussions with a person. Government funding flowed generously. Optimism was so high that, in 1970, Minsky famously proclaimed, "In three to eight years we will have a machine with the general intelligence of a human being."
By the mid 1970s, however, it was clear that Minsky's optimism was unwarranted. Progress stalled as many of the innovations of the previous decade proved too narrow in their applicability, seeming more like toys than steps toward a general version of artificial intelligence. Funding dried up so completely that researchers soon took pains not to refer to their work as AI, as the term carried a stink that killed proposals.
The cycle repeated itself in the 1980s with the rise of expert systems and the renewed interest in what we now call neural networks (i.e., programs based on connectivity architectures that mimic neurons in the brain). Once again, there was wild optimism and big increases in funding. What was novel in this cycle was the addition of significant private funding as more companies began to rely on computers as essential components of their business. But, once again, the big promises were never realized, and funding dried up again.
AI: Hype vs. reality
The AI summer we're currently experiencing began sometime in the first decade of the new millennium. Vast increases in both computing speed and storage ushered in the era of deep learning and big data. Deep learning methods use stacked layers of neural networks that pass information to each other to solve complex problems like facial recognition. Big data provides these systems with vast oceans of examples (like images of faces) to train on. The applications of this progress are all around us: Google Maps gives you near-perfect directions; you can talk with Siri anytime you want; IBM's Watson beat Jeopardy!'s greatest human champions.
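To make the idea of "stacked layers" concrete, here is a toy sketch (my illustration, not any production system): each layer is just a matrix of weights, and the network simply hands each layer's output to the next.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The nonlinearity applied after each layer.
    return np.maximum(0.0, x)

# Three stacked layers of made-up weights: 8 inputs -> 16 -> 16 -> 2.
layers = [rng.normal(size=(8, 16)),
          rng.normal(size=(16, 16)),
          rng.normal(size=(16, 2))]

def forward(x):
    # Each layer transforms its input and passes the result to the next;
    # that hand-off is the "stacking" described above.
    for weights in layers:
        x = relu(x @ weights)
    return x

batch = rng.normal(size=(4, 8))  # a batch of 4 toy examples
print(forward(batch).shape)      # (4, 2)
```

Real systems add many refinements (training by backpropagation, specialized layer types), but the information flow is exactly this: layer feeds layer.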
In response, the hype rose again. True AI, we were told, must be just around the corner. In 2015, for example, The Guardian reported that self-driving cars, the killer app of modern AI, were close at hand. Readers were told, "By 2020 you will become a permanent backseat driver." And just two years ago, Elon Musk claimed that by 2020 "we'd have over a million cars with full self-driving software."
By now, it's obvious that a world of fully self-driving cars is still years away. Likewise, in spite of the remarkable progress we've made in machine learning, we're still far from creating systems that possess general intelligence. The emphasis is on the term general because that's what AI really has been promising all these years: a machine that's flexible in dealing with any situation as it comes up. Instead, what researchers have found is that, despite all their remarkable progress, the systems they've built remain brittle, which is a technical term meaning "they do very wrong things when given unexpected inputs." Try asking Siri to find "restaurants that aren't McDonald's." You won't like the results.
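The McDonald's example hints at why brittleness happens. Here is a deliberately naive, hypothetical sketch of keyword-based search (not how Siri actually works) that ignores negation and so does exactly the wrong thing:

```python
# Hypothetical illustration of brittleness, not any real assistant's code:
# a keyword matcher that scores places by word overlap with the query.
def normalize(text):
    return set(text.lower().replace("'", "").split())

def naive_search(query, places):
    query_words = normalize(query)
    # Pick the place sharing the most words with the query. Negation
    # ("aren't") is just another word, so it carries no meaning here.
    return max(places, key=lambda place: len(query_words & normalize(place)))

places = ["Thai Palace", "Luigi's Pizza", "McDonald's"]
print(naive_search("restaurants that aren't McDonald's", places))  # McDonald's
```

Because the matcher only counts shared words, "aren't" carries no weight, "McDonald's" dominates the score, and the system confidently returns the one answer the user excluded.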
Even more important is the sense that, as remarkable as they are, none of the systems we've built understand anything about what they are doing. As philosopher Alva Noë said of Watson's famous Jeopardy! victory, "Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson." Considering this fact, some researchers claim that the general intelligence — i.e., the understanding — we humans exhibit may be inseparable from our experiencing. If that's true, then our physical embodiment, enmeshed in a context-rich world, may be difficult if not impossible to capture in symbolic processing systems.
Not the (AI) winter of our discontent
Thus, talk of a new AI winter is popping up again. Given the importance of deep learning and big data in technology, it's hard to imagine funding for these domains drying up any time soon. What we may be seeing, however, is a kind of AI autumn, when researchers wisely recalibrate their expectations and perhaps rethink their perspectives.
With the rise of Big Data, methods used to study the movement of stars or atoms can now reveal the movement of people. This could have important implications for cities.
- A treasure trove of mobility data from devices like smartphones has allowed the field of "city science" to blossom.
- I was recently part of a team that compared mobility patterns in Brazilian and American cities.
- We found that, in many cities, low-income and high-income residents rarely travel to the same geographic locations. Such segregation has major implications for urban design.
Almost 55 percent of the world's nearly eight billion people live in cities. And unless the COVID-19 pandemic puts a serious — and I do mean serious — dent in long-term trends, the urban fraction will climb almost to 70 percent by midcentury. Given that our project of civilization is staring down a climate crisis, the massive population shift to urban areas is something that could really use some "sciencing."
Is urbanization going to make things worse? Will it make things better? Will it lead to more human thriving or more grinding poverty and inequality? These questions need answers, and a science of cities, if there were such a thing, could provide them.
Good news. There already is one!
The science of cities
With the rise of Big Data (for better or worse), scientists from a range of disciplines are getting an unprecedented view into the beating heart of cities and their dynamics. Of course, really smart people have been studying cities scientifically for a long time. But Big Data methods have accelerated what's possible to warp speed. As "exhibit A" for the rise of a new era of city science, let me introduce you to the field of "human mobility" and a new study just published by a team I was on.
Human mobility is a field that's been amped up by all those location-enabled devices we carry around and the large-scale datasets of our activities, such as credit card purchases, taxi rides, and mobile phone usage. These days, all of us are leaving digital breadcrumbs of our everyday activities, particularly our movements around towns and cities. Using anonymized versions of these datasets (no names please), scientists can look for patterns in how large collections of people engage in daily travel and how these movements correlate with key social factors like income, health, and education.
There have been many studies like this in the recent past. For example, researchers looking at mobility patterns in Louisville, Kentucky found that low-income residents tended to travel further on average than affluent ones. Another study found that mobility patterns across different socioeconomic classes exhibit very similar characteristics in Boston and Singapore. And an analysis of mobility in Bogota, Colombia found that the most mobile population was neither the poorest nor the wealthiest citizens but the upper-middle class.
These were all excellent studies, but it was hard to make general conclusions from them. They seemed to point in different directions. The team I was part of wanted to get a broader, comparative view of human mobility and income. Through a partnership with Google, we were able to compare data from two countries — Brazil and the United States — of relatively equal populations but at different points on the "development spectrum." By comparing mobility patterns both within and between the two countries, we hoped to gain a better understanding of how people at different income levels moved around each day.
Mobility in Brazil vs. United States
Socioeconomic mobility "heatmaps" for selected cities in the U.S. and Brazil. The colors represent destinations based on income level. Red depicts destinations traveled to by low-income residents, while blue depicts destinations traveled to by high-income residents. Overlapping areas are colored purple. Credit: Hugo Barbosa et al., Scientific Reports, 2021.
The results were remarkable. In a figure from our paper (shown above), it's clear that we found two distinct kinds of relationship between income and mobility in cities.
The first was a relatively sharp distinction between where people in lower and higher income brackets traveled each day. For example, in my hometown of Rochester, New York, or in Detroit, the places visited by the two income groups (e.g., job sites, shopping centers, doctors' offices) were relatively partitioned. In other words, people from low-income and high-income neighborhoods were not mixing very much, meaning they weren't spending time in the same geographical locations. In addition, lower income groups traveled to the city center more often, while upper income groups traveled around the outer suburbs.
The second kind of relationship was exemplified by cities like Boston and Atlanta, which didn't show this kind of partitioning. There was a much higher degree of mixing in terms of travel each day, indicating that income was less of a factor for determining where people lived or traveled.
In Brazil, however, all the cities showed the kind of income-based segregation seen in U.S. cities like Rochester and Detroit. There was a clear separation of regions visited with practically no overlap. And unlike the U.S., visits by the wealthy were strongly concentrated in the city centers, while the poor largely traversed the periphery.
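One simple way to put a number on this kind of mixing (a crude stand-in, not the measure used in our paper) is the Jaccard overlap between the sets of places each income group visits: 0 means complete separation, 1 means identical travel patterns.

```python
# A crude, hypothetical stand-in for a segregation measure: the Jaccard
# overlap between the location sets visited by two income groups.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Invented toy data: map grid cells visited by low- and high-income groups.
segregated = jaccard({"c1", "c2", "c3"}, {"c7", "c8", "c9"})  # no shared cells
mixed = jaccard({"c1", "c2", "c3"}, {"c2", "c3", "c4"})       # two shared cells
print(segregated, mixed)  # 0.0 0.5
```

A city like Rochester would sit near the first case; a city like Boston, closer to the second.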
Data-driven urban design
Our results have straightforward implications for city design. As we wrote in the paper, "To the extent that it is undesirable to have cities with residents whose ability to navigate and access resources is dependent on their socioeconomic status, public policy measures to mitigate this phenomenon are the need of the hour." That means we need better housing and public transportation policies.
But while our study shows there are clear links between income disparity and mobility patterns, it also shows something else important. As an astrophysicist who spent decades applying quantitative methods to stars and planets, I am amazed at how deep we can now dive into understanding cities using similar methods. We have truly entered a new era in the study of cities and all human systems. Hopefully, we'll use this new power for good.
Scientists should be cautious when expressing an opinion based on little more than speculation.
- In October 2017, a strange celestial object was detected, soon to be declared our first recognized interstellar visitor.
- The press exploded when a leading Harvard astronomer suggested the object had been engineered by an alien civilization.
- This is an extraordinary conclusion that was based on a faulty line of scientific reasoning. Ruling out competing hypotheses doesn't make your hypothesis right.
Sometimes, when you are looking for something ordinary, you find the unexpected. This is definitely the case with the strange 'Oumuamua, which made international headlines as a potential interstellar visitor. Its true identity remained obscure for a while, as scientists proposed different explanations for its puzzling behavior. This is the usual scientific approach of testing hypotheses to make sense of a new discovery.
What captured the popular imagination was the claim that the object was no piece of rock or comet, but an alien artifact, designed by a superior intelligence.
Do you remember the black monolith tumbling through space in the classic Stanley Kubrick movie 2001: A Space Odyssey? The one that "inspired" our ape-like ancestors to develop technology and followed humanity and its development since then? What made this claim amazing is that it wasn't coming from the usual UFO enthusiasts but from a respected astrophysicist from Harvard University, Avi Loeb, and his collaborator Shmuel Bialy. Does their claim really hold water? Were we really visited by an alien artifact? How would we know?
A mystery at 200,000 miles per hour
Before we dive into the controversy, let's examine some history. 'Oumuamua was discovered accidentally by Canadian astronomer Robert Weryk while he was routinely reviewing images captured by the telescope Pan-STARRS1 (Panoramic Survey and Rapid Response System 1), situated atop the ten-thousand-foot Haleakala volcanic peak on the Hawaiian island of Maui. The telescope scans the skies in search of near-Earth objects, mostly asteroids and possibly comets that come close to Earth. The idea is to monitor the solar system to learn more about such objects and their orbits and, of course, to sound the alarm in case of a potential collision course with Earth. Unlike the objects Weryk was used to seeing, mostly moving at about 40,000 miles per hour, this one was moving almost five times as fast — nearly 200,000 miles per hour, definitely an anomaly.
Intrigued, astronomers tracked the visitor while it was visible, concluding that it must indeed have come from outside our solar system — the first recognized interstellar visitor. Unlike most known asteroids, which move in elliptical orbits around the sun, 'Oumuamua had a bizarre, mostly straight path. Also, its brightness varied by a factor of ten as it tumbled across space, a very unusual property that could be caused either by an elongated cigar shape or by it being flat, like a CD, with one side more reflective than the other. The object, 1I/2017 U1, became popularly known as 'Oumuamua, from the Hawaiian for "scout."
In their paper, Loeb and Bialy argue that the only way the object could be accelerated to the speeds observed was if it were extremely thin and very large, like a sail. They estimated that its thickness had to be between 0.3 and 0.9 millimeters. After confirming that such an object would be robust enough to withstand the hardships of interstellar travel (e.g., collisions with gas particles and dust grains, tensile stresses, rotation, and tidal forces), Loeb and Bialy conclude that it couldn't possibly be a solar system object like an asteroid or comet. Being thus of interstellar origin, the question is whether it is a natural or an artificial object. This is where the paper ventures into interesting but far-fetched speculation.
I'm not saying it was aliens, but it was aliens
First, the authors consider that it might be garbage "floating in interstellar space as debris from advanced technological equipment," ejected from its own stellar system due to its non-functionality; essentially, alien space junk. Then, they suggest that a "more exotic scenario is that 'Oumuamua may be a fully operational probe sent intentionally to Earth vicinity by an alien civilization," [italicized as in the original] concluding that a "survey for lightsails as technosignatures in the solar system is warranted, irrespective of whether 'Oumuamua is one of them."
I have known Avi Loeb for decades and consider him a serious and extremely talented astrophysicist. His 2018 paper includes a suggestive interpretation of strange data that obviously sparks the popular imagination. Theoretical physicists routinely suggest the existence of traversable wormholes, multiverses, and parallel quantum universes. Not surprisingly, Loeb was highly in demand by the press to fill in the details of his idea. A book followed, Extraterrestrial: The First Sign of Intelligent Life Beyond Earth, and its description tells all: "There was only one conceivable explanation: the object was a piece of advanced technology created by a distant alien civilization."
This is where most of the scientific establishment began to cringe. It is one thing to discuss the properties of a strange natural phenomenon and rule out more prosaic hypotheses while suggesting a daring one. It is another to declare to the public that the only conceivable explanation is one that is also speculative. An outsider will conclude that a reliable scientist has confirmed not only the existence of extraterrestrial life but of intelligent and technologically sophisticated extraterrestrial life with an interest in our solar system. I wonder if Loeb considered the impact of his words and how they reflect on the scientific community as a whole.
This is why aliens won't talk to us
Earlier this year, in a live public lecture hosted by the Catholic University of Chile, Avi Loeb locked horns with Jill Tarter, perhaps the scientist most closely identified with the search for signs of extraterrestrial intelligence, a search to which she devoted her career. (Coincidentally, I was the speaker who followed Loeb the next week in the same seminar series and was cautioned — along with the other panelists — to behave myself to avoid another showdown. I smiled, knowing that my topic was pretty tame in comparison. I mean, how can the limits of human knowledge compare with alien surveillance?)
The Loeb-Tarter exchange was awful and, it being a public debate, was picked up by the press. Academics can be rough like anyone else. But the issue goes deeper.
What scientists say matters. When should a scientist make public declarations about a cutting-edge topic with absolute certainty? I'd say never. There is no clear-cut certainty in cutting-edge science. There are hypotheses that should be tested more until there is community consensus. Even then, consensus is not guaranteed proof. The history of science is full of examples where leading scientists were convinced of something, only to be proven wrong later.
The epistemological mistake Loeb committed was to make an assertion that publicly amounted to certainty by using a process of elimination of other competing hypotheses. You can shoot down as many hypotheses as you want to vindicate yours, but this doesn't prove yours is the right one. It only means that the other hypotheses are wrong. I do, however, agree with Loeb when he says that 'Oumuamua should be the trigger for an increase in funding for the search for technosignatures, a way of detecting intelligent extraterrestrial life.
Reductionism offers a narrow view of the universe that fails to explain reality.
- Reductionism is the view that everything true about the world can be explained by atoms and their interactions.
- Emergence claims that reductionism is wrong, and the world can evolve new stuff and new laws that are not predictable from "nothing but" atoms.
- Which perspective on science is correct has huge implications, not only for ourselves but for everything from philosophy to economics to politics.
Stop me if you have heard this one before. "Sociologists defer to Psychologists. Psychologists defer to Neurologists. Neurologists defer to Biologists. Biologists defer to Chemists. Chemists defer to Physicists. Physicists defer to Mathematicians. Mathematicians defer to God."
While told as a joke among physicists (and mathematicians, I suppose), what this little list really describes is a hierarchy where the truth of some fields reduces to the truths of others. This "reductionist" view is so prevalent in our culture that it's really a default or implicit philosophy of science floating around in people's heads even if they never explicitly think about it.
Today, I want to begin a series of explorations of this idea of reduction — and its alternative — for two reasons. First, I am pretty sure reductionism is wrong, and that's not the way the world works at all. Second, this view of the world is more than just a matter of philosophy. It has manifested in ways that can be dangerous for our future. For instance, how do we use the living world if we see it as "nothing but" resources? (My colleague Marcelo Gleiser has an answer to that.) What do we expect from artificial intelligence if we see ourselves as "nothing but" neurons?
Thankfully, there is another way of looking at science, truth, and the world which might be more correct and less dangerous. It's called emergence, and it's going to be the focus of this series.
The problem with reductionism
A duck, reduced. Credit: Public Domain / Wikipedia
Let's take the 10,000-foot view to get an understanding of the problem. Here is a nice description of the reductionism perspective from philosopher Paul Humphreys:
"The world is nothing but spatiotemporal arrangements of fundamental physical objects and properties. You and I, rocks and galaxies, toads and scrambled eggs are just processes, the successive states of which are spatial arrangements of elementary physical objects. These elementary physical objects, arranged in different configurations, account for all the astonishing variety that we encounter in our day-to-day lives."
Those "fundamental objects" in Humphreys' description are the elementary particles of physics: electrons, quarks, etc. So, the idea is that once you have made a list of all those elementary particles, and once you know how those particles can interact (i.e., what forces they respond to), you are, in principle, done. Everything that can ever happen, everything that ever will happen, is, in principle, encoded in that list of particles and their interactions. That's why, again in principle, all the truths the sociologist uncovers must ultimately be explained by the truths that the physicist has uncovered.
Of course, and this is important, sophisticated supporters of this kind of reductionist view have a sophisticated philosophical understanding of how the chain of causes goes upward, letting you go from quarks to mollusks to governments. That's why I want to unpack the questions reductionism raises over a series of posts. But the little description I've penned above demonstrates one of reductionism's most important consequences. It describes a world without fundamental novelty or essential innovation.
This is really a question of "bottom-up" predictability. If you know the fundamental entities and their laws, you can, in principle, predict everything that will or can happen. All of future history, all of evolution, is just a rearrangement of those electrons and quarks. In the reductionist view, you, your dog, your love for your dog, and the doggie love it feels for you are all nothing but arrangements and rearrangements of atoms. End of story.
An emerging challenge
"Emergence" is the alternative to this view. As philosophers Brigitte Falkenburg and Margaret Morrison put it, "A phenomenon is emergent if it cannot be reduced to, explained or predicted from its constituent parts… emergent phenomena arise out of lower-level entities, but they cannot be reduced to, explained nor predicted from their micro-level base." From an emergentist view, over the course of the universe's history, new entities and even new laws governing those entities have appeared.
The key is evolution.
According to at least one kind of emergentist, the universe most definitely has the capacity to innovate and create novelty. The process it uses is evolution, and evolution is more than just physics. So, from this view, while you are obviously made of atoms, you are also more than just atoms. You, your dog, and the specifics of your person-dog affection could not be predicted, even in principle, even from perfect knowledge of all your elementary particles.
As a philosophy, emergence was first introduced by a group of British philosophers in the early 20th century. They argued that phenomena like life and consciousness were so different from the systems physics studied that they must represent new entities. But as the biochemical basis for life (e.g., DNA) was uncovered in the 1950s and '60s, interest in emergence waned. As Paul Humphreys notes, there wasn't even an entry for emergence in the 1967 Encyclopedia of Philosophy. Since then, however, critical developments in a number of fields have brought emergence back into view for both scientists and philosophers.
Science needs emergence
One of the most important reasons emergence has reappeared is that science needs it. At the frontiers of research, there is a remarkable new field called complex systems. Drawing insights from physics, biology, and the study of social systems, the theory of complex systems has given scientists a wide range of examples where new entities and new rules appear to emerge from the networked interaction of simpler parts. Colloquially, the whole is greater than the sum of its parts.
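The classic toy example is Conway's Game of Life (my illustration here, though it is a standard one in complexity circles): its rules mention nothing but individual cells and their eight neighbors, yet coherent, moving objects like the "glider" arise from them.

```python
from collections import Counter

# Conway's Game of Life: the rules refer only to single cells and their
# eight neighbors, yet larger-scale "objects" emerge.
def step(live):
    # Count, for every cell, how many live neighbors it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has 3 live neighbors, or has
    # 2 and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": nothing in the rules mentions it, yet it behaves as a unit.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After four steps the glider has reassembled itself one cell down-right.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in the two-line rule set predicts "a self-propelled object" in any obvious way; you find the glider by watching the system run, which is the emergentist's point in miniature.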
These studies have drawn a new generation of philosophers to re-engage with the ideas of emergence, using the advances in science as a spur to unpack how chains of causation can be closed or opened and run from the bottom up or the top down. From these examinations have come distinctions like "weak" vs. "strong" emergence, as well as challenges to the need for that split. These are the kinds of issues I want to unpack in this series over the next few months.
To sum it up for now, when it comes to reductionism and emergence, there are many thorny issues that require scrutiny. What is clear, though, is that the simple picture reductionism offers of a world made solely of atoms can no longer be seen as the only "sober" view of science and its perspective on life, the universe, and everything.
A revolution of the mind must occur in order for humanity to succeed on a finite planet.
- President Biden's energy summit is emblematic of an emerging mindset that is set to redefine our relation to the planet.
- 150 years of unchecked industrial and economic growth have changed humanity in profound ways but at a high and untenable environmental cost.
- We must move from the plundering mindset that sucked our prosperity from the bowels of the Earth to one that collects the energy that the skies serve us.
Rarely, if ever, do we stop to think about how remarkable certain everyday comforts are: to flick an electric switch and have light inundate a dark room; to turn on a faucet and have drinking water; to take a hot shower; to live in a home that is cool on hot days and warm on cold days; to step into a metal box and move wherever we want; to go to a store and buy food; to talk to someone across the world; to dump dirty clothes into a machine and have it wash them all. The list is endless.
Now, go back 150 years to 1871. Life was completely different. Energy was scarce; animals pulled plows and carriages; steam engines were beginning to flourish; technology was very primitive compared to today; medicine had yet to understand disease and sterilization. There were no telephones. Cars and airplanes had not been invented yet. Light bulbs were still a laboratory curiosity. People drank crude oil as medicine. The first gasoline-fueled combustion engine car, invented by Carl Benz in Germany, was still 15 years away. The world population was about 1.3 billion.
The pros and cons of technological progress
But look at us now! Fossil fuels transformed the world. Technology transformed the world. Life expectancy in the U.S. went from 39.4 years to 78.8 years. The world population grew to 7.8 billion, and well over 200,000 cars are built per day.
It's an amazing story of success for our species. And of catastrophic environmental devastation.
Even if technological innovation has its roots in basic research, the driver for the transition from the lab to the marketplace is money. Growth is measured by sales, and sales generate profit. In the past 150 years, the gross domestic product per capita in the U.S., Australia, New Zealand, and Canada (known collectively as Western Offshoots) grew from $4,647 to $53,757 (corrected for inflation and measured in international 2011 prices).
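As a quick sanity check on those numbers, the quoted 150-year rise in GDP per capita works out to a surprisingly modest compound rate:

```python
# Back-of-the-envelope check on the GDP figures above: what constant
# annual growth rate turns $4,647 into $53,757 over 150 years?
start, end, years = 4_647, 53_757, 150
rate = (end / start) ** (1 / years) - 1
print(f"annual growth: {rate:.2%}")  # roughly 1.6% per year, compounded
```

A bit over one and a half percent per year, sustained for a century and a half, is enough to multiply prosperity more than elevenfold; that is the quiet arithmetic behind "growth is measured by sales."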
What feeds these fat pockets? Fossil fuels, deforestation, mining, the depletion of the oceans, industrialized agriculture. The obvious truth is becoming clearer to a growing number of people: we live on a finite planet, with finite resources, and with a finite capability of cleaning the mess we make. The time of treating the oceans and the rivers as giant sewage dumps, the atmosphere as an endless sponge for noxious fumes, and the forests as inconvenient obstacles to be removed for expansive cattle grazing and agriculture is over.
The essential question, then, is what can be done? Is it possible to maintain the current growth rate based on a profoundly different worldview, one where the fuel that feeds growth is not unchecked environmental destruction but a symbiotic relationship between our species and the planet we inhabit? Can the economy adapt to a new worldview before we inflict even more irreversible damage to the planet?
The first point to keep in mind is that we are not separate from the environmental devastation we perpetrate. If the environment goes, we go. We need clean air, clean water, and clean energy to survive. The more of us there are, the more urgent this obvious fact becomes. The inventiveness and resourcefulness that we have traditionally applied to industrial and warfare innovation must now be applied to our own survival on this planet. We need to reinvent how we relate to the world. We must move from the plundering mindset that sucked our prosperity from the bowels of the Earth to one that collects the energy that the skies serve us.
A revolution of the mind
This change in mindset represents a reversal from an aggressive relation to the environment — the metallic machines that dig holes to suck fossil fuels from the underground — to one that embraces what is already here: the sun, the wind, and the carbon-fixing capabilities of forestlands across the world.
Last week, President Biden convened 40 world leaders to discuss our collective energy future. The current administration clearly represents the new mindset. We must stop framing economic profit as being at odds with renewable energy. The old worldview, based on the past 150 years of the industrial growth motto — let's consume the bowels of the Earth to get rich — is dead. It's unviable. It's unsustainable. It's self-destructive. It's immoral.
The changes to come will be as world-changing as the ones that exploded during the early 20th century with rampant industrialization: An economy based on the passive extraction of renewable energy from the skies; vast reforestation programs for carbon fixing; a complete overhaul of the auto industry toward electric and hydrogen-cell vehicles; a retraining of the workforce to adapt to the growing automation of production and to the need for versatility in the marketplace due to the new jobs of the digital age; a redesign of school curricula to retell the story of our relation to the environment to raise awareness among younger generations; and an emergent new ethics of life that embraces the planet and all living creatures we share it with as partners and not targets.
A decade or so ago, these views would be dismissed as utopic or at least naïve. But not anymore. The new worldview is taking root, and foolish is the country that won't embrace it quickly. I'm glad to be alive to witness our reinvention.
Does science tell the truth?

It is impossible for science to arrive at ultimate truths, but functional truths are good enough.
- What is truth? This is a very tricky question, trickier than many would like to admit.
- Science does arrive at what we can call functional truth, that is, when it focuses on what something does as opposed to what something is. We know how gravity operates, but not what gravity is, a notion that has changed over time and will probably change again.
- The conclusion is that there are no absolute, final truths, only functional truths that are agreed upon by consensus. The essential difference is that scientific truths are agreed upon based on factual evidence, while most other truths are based on belief.
Does science tell the truth? The answer to this question is not as simple as it seems, and my 13.8 colleague Adam Frank took a look at it in his article about the complementarity of knowledge. There are many levels of complexity to what truth is or means to a person or a community. Why?
First, "truth" itself is hard to define or even to identify. How do you know for sure that someone is telling you the truth? Do you always tell the truth? In groups, what may be considered true to a culture with a given set of moral values may not be true in another. Examples are easy to come by: the death penalty, abortion rights, animal rights, environmentalism, the ethics of owning weapons, etc.
At the level of human relations, truth is very convoluted. Living in an age where fake news has taken center stage only corroborates this obvious fact. However, not knowing how to differentiate between what is true and what is not leads to fear, insecurity, and ultimately, to what could be called worldview servitude — the subservient adherence to a worldview proposed by someone in power. The results, as the history of the 20th century has shown extensively, can be catastrophic.
Proclamations of final or absolute truths, even in science, shouldn't be trusted.
The goal of science, at least on paper, is to arrive at the truth without recourse to any belief or moral system. Science aims to go beyond the human mess so as to be value-free. The premise here is that Nature doesn't have a moral dimension, and that the goal of science is to describe Nature the best possible way, to arrive at something we could call the "absolute truth." The approach is a typical heir to the Enlightenment notion that it is possible to take human complications out of the equation and have an absolute objective view of the world. However, this is a tall order.
It is tempting to believe that science is the best pathway to truth because, to a spectacular extent, science does triumph at many levels. You trust driving your car because the laws of mechanics and thermodynamics work. NASA scientists and engineers just managed to have the Ingenuity Mars Helicopter — the first aircraft to make a powered, controlled flight on another planet — hover above the Martian surface all by itself.
We can use the laws of physics to describe the results of countless experiments to amazing levels of accuracy, from the magnetic properties of materials to the position of your car in traffic using GPS locators. In this restricted sense, science does tell the truth. It may not be the absolute truth about Nature, but it's certainly a kind of pragmatic, functional truth at which the scientific community arrives by consensus based on the shared testing of hypotheses and results.
What is truth?
But at a deeper level of scrutiny, the meaning of truth becomes intangible, and we must agree with the pre-Socratic philosopher Democritus, who declared, around 400 BCE, that "truth is in the depths." (Incidentally, Democritus predicted the existence of the atom, something that certainly exists in the depths.)
A look at a dictionary reinforces this view. "Truth: the quality of being true." Now, that's a very circular definition. How do we know what is true? A second definition: "Truth: a fact or belief that is accepted as true." Acceptance is key here. A belief may be accepted as true, as is the case with religious faith. There is no need for evidence to justify a belief. But note that a fact can also be accepted as true, even if beliefs and facts are very different things. This illustrates how the scientific community arrives at a consensus about what is true through acceptance: a statement is accepted as true when sufficient factual evidence supports it. (Note that what counts as sufficient factual evidence is also settled by consensus.) At least until we learn more.
Take the example of gravity. We know that an object in free fall will hit the ground, and we can calculate when it does using Galileo's law of free fall (in the absence of friction). This is an example of "functional truth." If you drop one million rocks from the same height, the same law will apply every time, corroborating the factual acceptance of a functional truth, that all objects fall to the ground at the same rate irrespective of their mass (in the absence of friction).
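The mass independence in Galileo's law can be seen directly in the formula itself. This short sketch (standard textbook physics, not taken from the article) computes the fall time from height alone — mass never enters:

```python
import math

G_ACCEL = 9.81  # m/s^2, gravitational acceleration near Earth's surface

def fall_time(height_m: float) -> float:
    """Time for an object to fall from rest through height_m,
    ignoring air resistance (Galileo's law: h = 1/2 * g * t^2)."""
    return math.sqrt(2 * height_m / G_ACCEL)

# A pebble and a boulder dropped from 20 m take the same time in vacuum,
# because the function has no mass parameter at all.
print(f"{fall_time(20.0):.2f} s")  # ~2.02 s
```

Drop the million rocks the article imagines, and this one-line formula predicts every landing time the same way: that repeatability is what "functional truth" means here.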
But what if we ask, "What is gravity?" That's an ontological question about what gravity is and not what it does. And here things get trickier. To Galileo, it was an acceleration downward; to Newton, a force between two or more massive bodies, proportional to their masses and inversely proportional to the square of the distance between them; to Einstein, the curvature of spacetime due to the presence of mass and/or energy. Does Einstein have the final word? Probably not.
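The three descriptions of "what gravity does" can be set side by side in their standard textbook forms (a sketch for comparison; the article itself gives them only in words):

```latex
h = \tfrac{1}{2} g t^2
% Galileo: uniform downward acceleration in free fall

F = G \, \frac{m_1 m_2}{r^2}
% Newton: inverse-square attractive force between masses

G_{\mu\nu} = \frac{8\pi G}{c^4} \, T_{\mu\nu}
% Einstein: spacetime curvature sourced by mass-energy
```

Each later equation reproduces the earlier one in the appropriate limit, yet each redefines what the word "gravity" refers to — exactly the point about functional versus ontological truth.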
Is there an ultimate scientific truth?
Final or absolute scientific truths assume that what we know of Nature can be final, that human knowledge can make absolute proclamations. But we know that this can't really work, for the very nature of scientific knowledge is that it is incomplete and contingent on the accuracy and depth with which we measure Nature with our instruments. The more accuracy and depth our measurements gain, the more they are able to expose the cracks in our current theories, as I illustrated last week with the muon magnetic moment experiments.
So, we must agree with Democritus, that truth is indeed in the depths and that proclamations of final or absolute truths, even in science, shouldn't be trusted. Fortunately, for all practical purposes — flying airplanes or spaceships, measuring the properties of a particle, the rates of chemical reactions, the efficacy of vaccines, or the blood flow in your brain — functional truths do well enough.