If you ask your maps app to find "restaurants that aren't McDonald's," you won't like the result.
- The Chinese Room thought experiment is designed to show how understanding something cannot be reduced to an "input-process-output" model.
- Artificial intelligence today is becoming increasingly sophisticated thanks to learning algorithms but still fails to demonstrate true understanding.
- We all demonstrate computational habits when we first learn a new skill, until this somehow becomes understanding.
It's your first day at work, and a new colleague, Kendall, catches you over coffee.
"You watch the game last night?" she says. You're desperate to make friends, but you hate football.
"Sure, I can't believe that result," you say, vaguely, and it works. She nods happily and talks at you for a while. Every day after that, you live a lie. You listen to a football podcast on the weekend and then regurgitate whatever it is you hear. You have no idea what you're saying, but it seems to impress Kendall. You somehow manage to come across as an expert, and soon she won't stop talking football with you.
The question is: do you actually know about football, or are you imitating knowledge? And what's the difference? Welcome to philosopher John Searle's "Chinese Room."
The Chinese Room
Searle's argument was designed as a critique of what's called a "functionalist" view of mind. This is the philosophy that argues that our mind can be explained fully by what role it plays, or in other words, what it does or what "function" it has.
One form of functionalism sees the human mind as following an "input-process-output" model. We have the input of our senses, the process of our brains, and a behavioral output. Searle thought this was at best an oversimplification, and his Chinese Room thought experiment aims to show that human minds are not simply biological computers. It goes like this:
Imagine a room, and inside is John, who can't speak a word of Chinese. Outside the room, a Chinese person sends a message into the room in Chinese. Luckily, John has an "if-then" book for Chinese characters. For instance, if he gets <你好吗>, the proper reply is <我还好>. All John has to do is follow his instruction book.
The Chinese speaker outside of the room thinks they're talking to someone inside who knows Chinese. But in reality, it's just John with his fancy book.
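John's "if-then" book can be sketched as a simple lookup table. This is a minimal illustration, not anything from Searle's paper: the rule book below is a hypothetical stand-in, and only its first entry comes from the exchange described above.

```python
# A toy sketch of the Chinese Room as a lookup table.
# The rule book is a hypothetical stand-in for John's "if-then" book;
# only the first entry comes from the thought experiment above.

RULE_BOOK = {
    "你好吗": "我还好",  # "How are you?" -> "I'm doing fine"
}

def chinese_room(message: str) -> str:
    """Reply by pure symbol matching; no understanding involved."""
    return RULE_BOOK.get(message, "？")  # shrug when no rule applies

print(chinese_room("你好吗"))  # prints: 我还好
```

The point of the sketch is what's missing: nothing in the table "knows" what the characters mean, yet from the outside the replies look fluent.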
What is understanding?
Does John understand Chinese? The Chinese Room is, by all accounts, a computational view of the mind, yet it seems that something is missing. Truly understanding something is not an "if-then" automated response. John is missing that sinking-in feeling, the absorption, the bit of understanding that's so hard to express. Understanding a language doesn't work like this. Humans are not Google Translate.
And yet, this is how AIs are programmed. A computer system is programmed to provide a certain output based on a finite list of certain inputs. If I double click the mouse, I open a file. If you type a letter, your monitor displays tiny black squiggles. If we press the right buttons in order, we win at Mario Kart. Input — Process — Output.
But AIs don't know what they're doing, and Google Translate doesn't really understand what it's saying, does it? They're just following a programmer's orders. If I say, "Will it rain tomorrow?" Siri can look up the weather. But if I ask, "Will water fall from the clouds tomorrow?" it'll be stumped. A human would not (although they might look at you oddly).
A fun way to test just how little an AI understands us is to ask your maps app to find "restaurants that aren't McDonald's." Unsurprisingly, you won't get what you want.
The Future of AI
To be fair, the field of artificial intelligence is just getting started. Yes, it's easy right now to trick our voice assistant apps, and search engines can be frustratingly unhelpful at times. But that doesn't mean AI will always be like that. It might be that the problem is only one of complexity and sophistication, rather than anything else. It might be that the "if-then" rule book just needs work. Things like "the McDonald's test" or AI's inability to respond to original questions reveal only a limitation in programming. Given that language and the list of possible questions are finite, it's quite possible that AI will be able to (at the very least) perfectly mimic a human response in the not-too-distant future.
What's more, AIs today have increasingly advanced learning capabilities. Algorithms are no longer simply input-process-output but rather allow systems to search for information and adapt anew to what they receive.
A notorious example of this occurred when a Microsoft chat bot started spouting bigotry and racism after "learning" from what it read on Twitter. (Although, this might just say more about Twitter than AI.) Or, more sinister perhaps, two Facebook chat bots were shut down after it was discovered that they were not only talking to each other but were doing so in an invented language. Did they understand what they were doing? Who's to say that, with enough learning and enough practice, an AI "Chinese Room" might not reach understanding?
Can imitation become understanding?
We've all been a "Chinese Room" at times — be it talking about sports at work, cramming for an exam, using a word we didn't entirely know the meaning of, or calculating math problems. We can all mimic understanding, but this raises the question: can imitation become so fluid or competent that it is understanding?
The old adage "fake it till you make it" has been proven true over and over. If you repeat an action enough times, it becomes easy and habitual. For instance, when you practice a language, a musical instrument, or a math calculation, after a while it becomes second nature. Our brain changes with repetition.
So, it might just be that we all start off as Chinese Rooms when we learn something new, but this still leaves us with a pertinent question: how, and at what point, does John actually come to understand Chinese? More importantly, will Siri or Alexa ever understand you?
This spring, a U.S. and Chinese team announced that it had successfully grown, for the first time, embryos that included both human and monkey cells.
In Aldous Huxley's novel Brave New World, technicians in charge of the hatcheries manipulate the nutrients they give the fetuses to make the newborns fit the desires of society. Two recent scientific developments suggest that Huxley's imagined world of functionally manufactured people is no longer far-fetched.
On March 17, 2021, an Israeli team announced that it had grown mouse embryos for 11 days – about half of the gestation period – in artificial wombs that were essentially bottles. Until this experiment, no one had grown a mammal embryo outside a womb this far into pregnancy. Then, on April 15, 2021, a U.S. and Chinese team announced that it had successfully grown, for the first time, embryos that included both human and monkey cells in plates to a stage where organs began to form.
As both a philosopher and a biologist, I cannot help but ask how far researchers should take this work. While creating chimeras – the name for creatures that are a mix of organisms – might seem like the more ethically fraught of these two advances, many ethicists think the medical benefits far outweigh the ethical risks. However, ectogenesis – the development of embryos outside the body – could have far-reaching impacts on individuals and society, and the prospect of babies grown in a lab has not been put under nearly the same scrutiny as chimeras.
Mouse embryos were grown in an artificial womb for 11 days, and organs had begun to develop.
Growing in an artificial womb
When in vitro fertilization first emerged in the late 1970s, the press called IVF embryos "test-tube babies," though they are nothing of the sort. These embryos are implanted into the uterus within a day or two after doctors fertilize an egg in a petri dish.
Before the Israeli experiment, researchers had not been able to grow mouse embryos outside the womb for more than four days – providing the embryos with enough oxygen had been too hard. The team spent seven years creating a system of slowly spinning glass bottles and controlled atmospheric pressure that simulates the placenta and provides oxygen.
This development is a major step toward ectogenesis, and scientists expect that it will be possible to extend mouse development further, possibly to full term outside the womb. This will likely require new techniques, but at this point it is a problem of scale – being able to accommodate a larger fetus. This appears to be a simpler challenge to overcome than figuring out something totally new like supporting organ formation.
The Israeli team plans to deploy its techniques on human embryos. Since mice and humans have similar developmental processes, it is likely that the team will succeed in growing human embryos in artificial wombs.
To do so, though, members of the team need permission from their ethics board.
CRISPR – a technology that can cut and paste genes – already allows scientists to manipulate an embryo's genes after fertilization. Once fetuses can be grown outside the womb, as in Huxley's world, researchers will also be able to modify their growing environments to further influence what physical and behavioral qualities these parentless babies exhibit. Science still has a way to go before fetus development and births outside of a uterus become a reality, but researchers are getting closer. The question now is how far humanity should go down this path.
Chimeras evoke images of mythological creatures of multiple species – like this 15th-century drawing of a griffin – but the medical reality is much more sober. (Martin Schongauer/WikimediaCommons)
Human–monkey hybrids might seem to be a much scarier prospect than babies born from artificial wombs. But in fact, the recent research is more a step toward an important medical development than an ethical minefield.
If scientists can grow human cells in monkeys or other animals, it should be possible to grow human organs too. This would solve the problem of organ shortages around the world for people needing transplants.
But keeping human cells alive in the embryos of other animals for any length of time has proved to be extremely difficult. In the human-monkey chimera experiment, a team of researchers implanted 25 human stem cells into embryos of crab-eating macaques – a type of monkey. The researchers then grew these embryos for 20 days in petri dishes.
After 15 days, the human stem cells had disappeared from most of the embryos. But at the end of the 20-day experiment, three embryos still contained human cells that had grown as part of the region of the embryo where they were embedded. For scientists, the challenge now is to figure out how to maintain human cells in chimeric embryos for longer.
Regulating these technologies
Some ethicists have begun to worry that researchers are rushing into a future of chimeras without adequate preparation. Their main concern is the ethical status of chimeras that contain human and nonhuman cells – especially if the human cells integrate into sensitive regions such as a monkey's brain. What rights would such creatures have?
However, there seems to be an emerging consensus that the potential medical benefits justify a step-by-step extension of this research. Many ethicists are urging public discussion of appropriate regulation to determine how close to viability these embryos should be grown. One proposed solution is to limit growth of these embryos to the first trimester of pregnancy. Given that researchers don't plan to grow these embryos beyond the stage when they can harvest rudimentary organs, I don't believe chimeras are ethically problematic compared with the true test-tube babies of Huxley's world.
Few ethicists have broached the problems posed by the ability to use ectogenesis to engineer human beings to fit societal desires. Researchers have yet to conduct experiments on human ectogenesis, and for now, scientists lack the techniques to bring the embryos to full term. However, without regulation, I believe researchers are likely to try these techniques on human embryos – just as the now-infamous He Jiankui used CRISPR to edit human babies without properly assessing safety and desirability. Technologically, it is a matter of time before mammal embryos can be brought to term outside the body.
While people may be uncomfortable with ectogenesis today, this discomfort could pass into familiarity as happened with IVF. But scientists and regulators would do well to reflect on the wisdom of permitting a process that could allow someone to engineer human beings without parents. As critics have warned in the context of CRISPR-based genetic enhancement, pressure to change future generations to meet societal desires will be unavoidable and dangerous, regardless of whether that pressure comes from an authoritative state or cultural expectations. In Huxley's imagination, hatcheries run by the state grew large numbers of identical individuals as needed. That would be a very different world from today.
Sahotra Sarkar, Professor of Philosophy and Integrative Biology, The University of Texas at Austin College of Liberal Arts
Many people believe that in the face of profound evil, they would have the courage to speak up. It might be harder than we think.
- After World War II, many psychologists wanted to address the question of how it was that people could go along with the evil deeds of fascist regimes.
- Solomon Asch's experiment alarmingly showed just how easily we conform and how susceptible we are to group influence.
- People will often sacrifice not only truth and reason to conformity but also their own health and sense of right and wrong.
It's the last question of the quiz, and Chloë knows the answer: it's Bolivia. Yes, it's definitely Bolivia. She went there last year, so she ought to know.
But then Shaun says it's Panama, and all the others agree with him. Chloë's sure it's Bolivia, but Shaun's so confident, the others now are nodding furiously along with him.
"What do you think, Chloë?" she's asked. She pauses for a moment.
"Yeah... Shaun's probably right. Put Panama," she mumbles.
The question of conformity
We've all been Chloë. Humans are social animals with families, tribes, and workplaces. So, it's no wonder that we try to fit in or conform. Social rejection is devastating, and we're biologically wired to avoid it. A sense of belonging and cooperation is essential to dealing with the world. Sometimes, though, this instinct can take us to ridiculous or dark places.
In the decades after World War II, politicians and academics were curious to know how it was that a country like Germany — so steeped in tradition, culture, and education — could fall into such a terrible regime within such a short time. Psychologists Stanley Milgram and Philip Zimbardo conducted experiments to answer a question many everyday people were asking: "Could it happen here?"
While the Milgram and Zimbardo experiments are pretty famous, a lesser-known experiment was conducted in the early 1950s by Solomon Asch. It demonstrated just how far humans are willing to go for the sake of "fitting in" and conforming to the group.
The Asch experiment
Asch had his volunteers perform a simple task: they were all given a series of lines drawn on a card and asked to choose which line was longest out of three options. The right answer was laughably obvious; for instance, line A was clearly the longest. When they were alone, people chose correctly nearly every time.
Asch then put his subjects in a group with actors who had been instructed to deliberately choose the wrong answer. Under these conditions, 75 percent of subjects agreed with the group consensus at least once, even though they were blatantly wrong.
What makes us conform?
A little surprised by this, Asch went on to do a series of related experiments and documented the factors that made it more or less likely that people would "conform" to the group consensus. Here are some of them:
The difficulty of the task. When there's a higher degree of ambiguity or uncertainty about the answer (for instance, the lines in the experiment weren't so obviously different), we're more likely to agree with others.
Reliability of the source. If someone within the group seems more reliable or knowledgeable about a topic — like a doctor about a disease — then we are more likely to go along with that person's view.
Publicity. People are much more likely to conform if they have to declare their judgment publicly rather than privately.
Degree of unanimity. The presence of merely one or two dissenting voices in a group of any size greatly increases the chances that others will not conform. Even one rebellious response is enough to make others follow suit.
The implications of conformity
Of course, conformity has implications far beyond quizzes with your friends or measuring lines.
A similar but more alarming study was conducted by John Darley and Bibb Latané in the late 1960s. In this study, they had subjects appear for an apparent "job interview." As the subjects were waiting, smoke was slowly pumped into the room. When people were alone, they would usually check to see what was wrong, or they would get up and leave.
But when subjects were in a room with actors pretending nothing was wrong, the majority made no move whatsoever. This happened despite people coughing and rubbing their eyes from all the smoke. Amazingly, people were willing to risk their own health rather than break with group behavior. (No wonder many of us are hesitant to interrupt a meeting at work to open a window when the room is far too hot.)
What do these experiments suggest about conformity? Well, as Asch said, we learned "that intelligent and well-meaning young people are willing to call white black." He concluded that this was "a matter of concern." Indeed.
Would you tell a laughing group of people that a joke was sexist or racist or bigoted? In your heart of hearts, do you think that you — surely a loving and kind person — would have had the courage to resist Nazism? Psychology experiments strongly suggest you would not.
Undoubtedly, there are huge evolutionary, social, and emotional benefits to conformity. Many times, it has done great good. But equally true is that conformity can also bring out the darkest and worst in us.
Regularities, which we associate with laws of nature, require an explanation.
- The nomological argument for the existence of God comes from the Greek nomos or "law," because it's based on the laws of nature.
- There are pragmatic, aesthetic, and moral reasons for regularities to exist in nature.
- The best explanation may be the existence of a personal God rather than mindless laws or chance.
Here's a new version of an old argument for the existence of God. It's called the "nomological argument," after the Greek nomos or "law," because it's based on laws of nature.
Suppose that you receive five consecutive royal flushes in a game of poker. What explains this? You could have received them by chance, but that seems unlikely. A better explanation is that someone has arranged the decks in your favor.
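To put a number on "unlikely," here is a rough back-of-the-envelope calculation, assuming each hand is an independent five-card deal from a full, shuffled 52-card deck:

```python
from math import comb

# Odds of a royal flush: 4 favorable hands (one per suit) out of
# all possible 5-card hands from a 52-card deck.
total_hands = comb(52, 5)        # 2,598,960 possible hands
p_royal = 4 / total_hands        # about 1 in 650,000
p_five_in_a_row = p_royal ** 5   # roughly 8.6e-30

print(f"P(one royal flush) ≈ {p_royal:.2e}")
print(f"P(five in a row)   ≈ {p_five_in_a_row:.1e}")
```

Five in a row by chance is on the order of 10^-30: not impossible, but any reasonable player would reach for a better explanation.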
Similarly, we can ask for an explanation of why nature is full of regularities, such as that planets have elliptical orbits and that oppositely charged particles attract. As with your sequence of hands, these regularities could be the result of chance, but that seems unlikely. A better explanation is that something is responsible for them. But what?
To clarify, we're not asking why we have the specific regularities that we do in fact have. Thus, we're not asking why the laws of nature appear to be fine-tuned to support life: for example, that gravity is the correct strength to permit the formation of stars. We think that's an interesting question but not our present topic. (See our "Further Reading" section below if you want to learn more.) Similarly, we're not talking about "intelligent design"; we're not asking why well-adapted species exist today. We think that can be adequately explained by citing regularities of natural selection and genetics. Our question is more general: Why are there any regularities at all, as opposed to irregularities?
Regularities: The nomological argument for the existence of God
Credit: NASA / JPL-Caltech / Space Science Institute
According to the nomological argument, the best explanation of regularities involves a supernatural personal being, God. It's not necessary for God to have all the attributes of a theistic or Biblical god — namely, omnipotence, omniscience, and moral perfection — but only that God is an intelligent being with the power to control whether nature exhibits regularities. In other words, this argument holds that regularities in nature are analogous to your winning poker hands.
To begin, why does the best explanation of your sequence of royal flushes involve a person? Well, we can think of pragmatic, aesthetic, and even moral reasons why a person might want to impose order on decks of cards. A pragmatic reason is about self-interest: someone might impose order on the deck of cards because they want you to win some money. An aesthetic reason is about elegance or beauty: royal flushes might just look nice. And maybe a moral reason could be that you deserve to win.
Similarly, we can think of pragmatic, aesthetic, and even moral reasons why God might want to impose regularities on nature: notably, most of the valuable things we know of (such as happiness, love, rationality, knowledge, or meaningfully free choices) cannot be realized in worlds without regularities. And since God is a person, we have reason to think that God might have moral and aesthetic preferences. Indeed, this would be so even if God were evil or had poor taste, since almost any moral and aesthetic states of affairs require some degree of regularity. As a result, if you knew that a personal being was about to create a world, you wouldn't be unreasonable in anticipating regularities, even if you knew nothing else about that being.
Objections and further development
At this point, someone might object as follows: Do we really need to invoke God? Doesn't Occam's Razor say we should prefer a simpler explanation or not posit this extra, unnecessary thing? Well, positing God doesn't really commit us to much more than other explanations of regularity would; they too would posit additional entities.
For example, suppose we try to posit laws of nature to explain regularities instead of God. We all have some idea of what a law of nature is supposed to be: Newton's laws of motion, the law that nothing can travel faster than the speed of light, or the ideal gas laws. Scientists posit laws such as these to explain things all the time. However, scientists typically assume that there are regularities, and they try to determine which ones are the most significant, important, or fundamental. When they've found one, they call it a "law of nature." In their role as scientists, they don't try to explain why there are fundamental laws of nature in the first place. So if we want to explain why there are regularities as opposed to irregularities — indeed, if we want to explain why science is possible at all — we have to do some philosophy. If we were going to explain regularities by positing laws, we'd first have to say what a law is.
There are philosophical accounts of laws that do not involve God, but those that attempt to explain regularities all do so by positing extra entities, too. These involve exotic things such as Platonic universals, Aristotelian natural kind essences, or other sorts of primitive necessities. As far as Occam's Razor is concerned, that's no better than positing God.
Moreover, these competing theories face a different problem. Positing mindless laws of nature with no ultimate explanation just seems to push the problem back. Now we have yet another interesting phenomenon to explain. Why did the laws that just randomly happened to exist generate regularities, which are only a relatively tiny portion of the possible set of events? To return to our analogy, it wouldn't be satisfying to say that you got five royal flushes in a row because some mindless law just happened to guarantee that result. (Why wasn't there a different law, one that generated any one of the octillions of other possible sequences instead? Just a huge coincidence?) In any case, we say a lot more in our journal article about why other explanations, such as alternative philosophical accounts of the nature of laws, don't do a great job of explaining regularities.
One might worry that positing God pushes the problem back in exactly the same way: What explains the existence of God? Well, everyone has to posit something, and we can always ask for an explanation of those things. Because positing God is relatively modest, we think it's more or less on the same footing as positing anything else — maybe no philosophical theory can really explain its fundamental entities. However, positing God answers a difficult question that other accounts don't: namely, why are there regularities as opposed to irregularities? To posit nothing, or pure, random chance, is modest but doesn't do a good job of explaining: random chance doesn't explain the five royal flushes. To posit some mindless explanation that just happened, coincidentally, to give us something as complex and consistent as a regularity does a good job of explaining but isn't really modest: your poker opponent would be very skeptical if you posited something as complex and coincidental as that as an explanation of your five royal flushes. (For those familiar with Bayesian reasoning, we're arguing that "God" strikes the best balance between prior probability of the explanation and likelihood of the phenomenon to be explained.) As a result, it doesn't merely push back the specific problem that concerns us.
Another objection might be that we've just posited a "God of the gaps" — simply positing God ad hoc when there's some gap in our knowledge. However, we haven't argued, "We don't know why laws of nature exist, and therefore, God did it." Instead, we've argued as follows: We know why God would create regularities, but we don't know why random chance or some mindless law would. And recall, the version of God we've described — simply a person with the power to control whether there are regularities — is relatively modest. Therefore, God provides a pretty good explanation of these regularities.
We'll mention one last objection. Proponents of a multiverse might say that regularity isn't surprising, because the probability that at least one universe exhibits regularity is high. Some proponents of a multiverse are motivated by scientific considerations. However, since the relevant scientific theories (inflation, string theory, many-worlds interpretations of quantum mechanics) posit underlying regularities that generate and maintain the multiverse, we can simply ask what explains those regularities. Other proponents of a multiverse are motivated by philosophical considerations — for example, that we should posit a plurality of possible worlds to make sense of our concepts of possibility and necessity. This might be a good reason to posit possible worlds, but it doesn't really explain regularities in our world. After all, you wouldn't find your sequence of royal flushes any less surprising upon learning that poker is a very popular game.
Philosophy is hard
One last disclaimer: Philosophy can be really hard. We don't claim to provide a proof, or even an especially strong argument, for the existence of God. Instead, we merely claim that this appeal to God has some important explanatory virtues and that, as a result, it deserves serious consideration as an explanation of why there are regularities.
Though modest, this conclusion is noteworthy. As we alluded to above, scientific practice requires regularities. By providing a philosophical explanation of regularities, we are trying to explain why science is possible in the first place. Relatedly, many Early Modern philosophers thought that scientific investigation of the natural world allowed us insight into the mind of God. If God's relation to the laws of nature is as we've suggested, theists should have a very positive attitude towards the sciences. Likewise, those who prefer naturalistic or atheistic accounts should at least be open-minded about the relationship between science and religion. This is not a new lesson, but it provides a further illustration of the fact that, while there may be no role for God or other supernatural entities in scientific explanations, this does not mean that science itself is necessarily at odds with religious belief.
Suggestions for further reading
The journal article on which this essay is based is:
Tyler Hildebrand and Thomas Metcalf, "The Nomological Argument for the Existence of God." Noûs. DOI 10.1111/nous.12364 (available on EarlyView)
For a book-length defense of a divine explanation of regularities, see:
John Foster, The Divine Lawmaker. Oxford University Press, 2004
For an introduction to the metaphysics of laws of nature, see:
Tyler Hildebrand, "Non-Humean Theories of Natural Necessity." Philosophy Compass 15, 2020
For more on multiverse-style objections to design arguments, see:
Thomas Metcalf, "On Friederich's New Fine-Tuning Argument," Foundations of Physics 51, 2021
Thomas Metcalf, "Fine-Tuning the Multiverse," Faith and Philosophy 35, 2018
For readers interested in the role of God in philosophical accounts of laws in the Early Modern period, see:
Ott & Patton's Laws of Nature (Oxford University Press, 2018)
Ott's Causation and Laws of Nature in Early Modern Philosophy (Oxford University Press, 2009)
For introductory essays aimed at relative beginners, see:
Thomas Metcalf, "Design Arguments for the Existence of God," in 1000-Word Philosophy: https://1000wordphilosophy.com/2018/02/28/design-a...
Thomas Metcalf, "Philosophy and its Contrast with Science," in 1000-Word Philosophy: https://1000wordphilosophy.com/2018/02/13/philosop...
Michael Zerella, "Laws of Nature," in 1000-Word Philosophy: https://1000wordphilosophy.com/2014/02/17/laws-of-...
Philosopher and logician Kurt Gödel upended our understanding of mathematics and truth.
- In 1900, mathematician David Hilbert laid down 23 problems for the mathematics world to solve, the biggest of which concerned how to prove the consistency of mathematics itself.
- Far from solving the issue, Kurt Gödel showed just how groundless the axioms of mathematics are.
- Gödel's theorem does not devalue mathematics but reveals that some truths are unprovable.
Everything's a bit crazy at the moment. We're drowning in a sea of lies, half-truths, polarization, debate, argument, and uncertainty. But at least there's math, right? That one sanctuary of truth and certainty. It's the algebraic flotsam we can grip on to, before we're swept away.
Well… look away now if you like it like that, because Kurt Gödel might be about to snatch even that away. His incompleteness theorems shook the foundations of the (math) universe. In fact, he rather did away with those foundations altogether.
In the early 20th century, the famous mathematician David Hilbert laid down 23 problems for the mathematics world to solve. Some of them are particularly esoteric, but the big ones concerned the issues of math's consistency and completeness. Hilbert hated the fact that the whole of mathematics depended on certain "axioms" that were, themselves, not proven. He wanted no loose ends, paradoxes, or unproven items. This was math after all!
Gödel, though, rather quashed all that.
To see how, we have to first know that "axioms" are those statements that we accept as true before we go about doing math. They're like the letters needed to make words. For example, A + B = B + A is an axiom, as are all the functions of arithmetic and so on. Simply put, axioms are the building blocks of mathematics. They're as true for Euclid, drawing squares in ancient Greek dust, as they are for a pained 15-year-old, frowning over some calculus.
The problem is that these axioms are not proven. They're true because they always work, and we observe them as true all of the time. But they're not proven.
Imagine the whole of mathematics as a huge sack, and inside are all the possible things math can do. It's a mighty big sack, indeed. What Gödel proved is that, first, there exist in this sack statements, the axioms among them, that cannot be proven or disproven from within it. Second, the sack can never prove its own consistency. It's impossible for math, on its own, to validate its own foundations.
Essentially, it's a problem of self-reference. It's an issue seen, too, in Russell's paradox about sets. More famously, the liar paradox imagines a sentence like, "This sentence is false." When you examine it closely, it creates a logical circularity. If the sentence is true, then it's false; but then if it's false, it's true. It's enough to make a robot's brain explode.
Gödel applied a similar logic to the whole system of mathematics. He took the sentence, "This statement is unprovable," and converted it into a statement about numbers (with a code system known as "Gödel numbering"). He discovered that this proposition cannot be proven within that system.
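The numbering trick itself is easy to illustrate. A toy version: give each symbol a number, then encode a whole formula as the product of successive primes raised to those numbers, so the formula can be recovered from a single integer by factoring. The three-symbol language below is invented for the example; Gödel's real coding covers a full formal language.

```python
from math import prod

# Hypothetical code table for a toy three-symbol language.
SYMBOL_CODES = {"0": 1, "=": 2, "S": 3}

def first_primes(n: int) -> list[int]:
    """Return the first n primes by trial division (fine at toy scale)."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def godel_number(formula: str) -> int:
    """Encode a formula as 2^c1 * 3^c2 * 5^c3 * ... for symbol codes c_i."""
    codes = [SYMBOL_CODES[ch] for ch in formula]
    return prod(p ** c for p, c in zip(first_primes(len(codes)), codes))

print(godel_number("0=0"))  # 2^1 * 3^2 * 5^1 = 90
```

Because prime factorization is unique, the integer 90 "is" the formula 0=0, which is how a sentence about sentences becomes a sentence about numbers.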
Going even further, Gödel concluded that every system rich enough to allow for arithmetic will contain a proposition that cannot be proven by its own tools. We need some kind of "metalanguage" to prove the rules by which a system operates. It's a bit like how we can't see our own eyes or draw around the hand that's holding the pencil.
How postmodernists weaponized Gödel
Gödel has been misrepresented, even in his lifetime. For instance, certain postmodernist philosophers used him to say, "There is no truth! Even math is groundless!" They wanted to show how everything was meaningless, and truth amounted only to opinion.
But this isn't the point. Gödel only showed that truth does not always need to be proven. This is, of course, no small thing. To pull apart truth and provability, to allow for "unproven truths," seems highly counterintuitive.
Gödel himself thought there were objective truths. His theorems only went to show the limitations of mathematics, not that it was flawed in any way. He would have hated what the postmodernists made of his work.