Sure, the old Greek guys from 2,400 years ago get all the glory. But these living philosophers have a ton to say about life, the universe, and everything as it relates to right now.
It can be easy to think that all the good ideas have already been thought; after all, philosophy has been going on for more than 2,500 years. But that isn't true! There are still genius philosophers out there. Here, we give you ten living people with ideas worth learning about.
One of the most cited philosophers of the modern age, Chomsky has written extensively on linguistics, cognitive science, politics, and history. His work has had an effect on everything from developmental psychology to the debates between rationalism and empiricism, and led to a decline of support for behaviorism. He remains an active social critic and public intellectual, including here on Big Think.
“Colorless green ideas sleep furiously”
Zizek is a modern Marxist who has commented extensively on culture, society, theology, psychology, and our tendency to view the world through the lens of "ideology". He has devoted a great deal of time to updating the idea of dialectical materialism. He is also a frequent Big Think contributor.
“Humanity is OK, but 99% of people are boring idiots.”
West is an American philosopher who focuses on politics, religion, race, and ethics. Hardly camera-shy, he is often seen on television talk shows and even had a cameo in the Matrix films. His work has expanded on the ideas of W.E.B. Du Bois on more than one occasion, and continues to focus on the issues of being an "Other" in modern society. His Big Think videos can be found here.
“The Enlightenment worldview held by Du Bois is ultimately inadequate, and, in many ways, antiquated, for our time.”
An American philosopher at the University of Chicago, Nussbaum has written about subjects as diverse as ancient Greek philosophy, ethics, feminism, political philosophy, and animal rights. Along with Amartya Sen, she also developed the Capability Approach, which inspired the United Nations Human Development Index.
“Now the fact that Aristotle believes something does not make it true. (Though I have sometimes been accused of holding that position!)”
Alasdair MacIntyre is a Scottish philosopher who has written on ethics and morality, political philosophy, theology, and the history of philosophy. His most popular book, After Virtue, helped to fuel a resurgence in virtue ethics. His thought shifted from a Marxist view in his early work to one that combines his former Marxism with his later Catholicism and Neo-Aristotelian insights.
“We are waiting not for Godot, but for another—doubtless very different—St. Benedict.”
An American philosopher, cognitive scientist, and one of the so-called Four Horsemen of New Atheism. He has written on free will for decades, and supports the compatibilist view. He has also written on how philosophers think, explaining how the idea of the "intuition pump" can both mislead and enlighten us. He has also given many interesting interviews to Big Think.
“The Darwinian Revolution is both a scientific and a philosophical revolution, and neither revolution could have occurred without the other.”
An analytic philosopher working at Columbia University, Dr. Kitcher has done extensive work on the philosophy of science itself. His work has focused recently on the criteria for “good” science, and the philosophy of climate change.
"A great scientific theory, like Newton's, opens up new areas of research... Because a theory presents a new way of looking at the world, it can lead us to ask new questions, and so to embark on new and fruitful lines of inquiry."
A modern consequentialist who puts his money where his ideas are. Author of The Life You Can Save, a book on how utilitarianism demands altruism from you right now, he went on to create an organization dedicated to the idea. He has also written on animal rights, and is a vegetarian. His stances on euthanasia and quality of life have been the cause of a great many protests over the years, often preventing him from speaking. His Big Think videos help explain his philosophy.
“We are responsible not only for what we do but also for what we could have prevented.”
An Indian philosopher and Nobel laureate who has worked for decades in welfare economics, capability theory, and the questions of justice. He often writes on the need to view the implementation of philosophical ideals in degrees of success, rather than as "existent" or "non-existent". His work went on to inspire Martha Nussbaum, and the two continue to complement each other's work.
“Democracy has to be judged not just by the institutions that formally exist but by the extent to which different voices from diverse sections of the people can actually be heard”
An American philosopher who has written on gender, politics, ethics, the self, and cultural pressures. She developed the theory of gender performativity, arguing that no gender exists beyond actions used to express a gender role. Her Big Think work can be found here.
“There is no gender identity behind the expressions of gender; that identity is performatively constituted by the very "expressions" that are said to be its results.”
Philosophers David Chalmers and Daniel Dennett argue over “philosophical zombies,” created to question the nature of human consciousness.
Zombies are a big part of our pop culture. They are both a cathartic exploration of what it means to be human and a vehicle for social commentary. The word "zombie" comes from Haitian folklore and refers to a corpse animated by witchcraft. Facing a horrid life, 17th-century slaves on the sugar plantations of the French colony of Saint-Domingue (present-day Haiti) often considered suicide but feared being trapped in their bodies, wandering the Earth as soulless shells.
In philosophy, this idea of a hypothetical creature that looks like a regular human but has no conscious experiences is known as a “philosophical zombie" or a “p-zombie".
Why do philosophers need zombies?
The concept is kind of a mind trick. Imagine a being that looks and even talks like a human. It goes through all the normal motions of a human and yet has no consciousness. And you would have no idea that it is not like you.
According to philosophers like David Chalmers, p-zombies are an argument against physicalism - the school of thought that everything that makes us human is ultimately derived from our physical characteristics.
Physicalism is based on the success of science in exploring the physical world. According to physicalists, we are essentially intricate arrangements of atoms. Behaviorists, a subset of physicalists, maintain that even mental processes - thoughts, desires, etc. - are just patterns of, and dispositions toward, behavior.
If a p-zombie that is exactly like us, except for the sense of self and consciousness, is logically conceivable, then this possibility could support dualism, an alternative view that sees the world consisting of not just the physical but also the mental.
David Chalmers, Australian philosopher and cognitive scientist, who currently teaches at NYU, thinks that the p-zombie thought experiment can be used to illustrate the “hard problem" of consciousness - “why do physical processes give rise to conscious experience?"
In other words, since a world of zombies is imaginable, all behaving purely at the physical level, why did evolution produce consciousness in humans?
“If there is a possible world which is just like this one except that it contains zombies, then that seems to imply that the existence of consciousness is a further, nonphysical fact about our world. To put it metaphorically, even after determining the physical facts about our world, God had to "do more work" to ensure that we weren't zombies," says Chalmers.
His argument goes like this:
1. Physicalism says that everything in our world is physical.
2. If physicalism is true, then any metaphysically possible world that is physically identical to ours must contain everything our world contains, including consciousness.
3. But we can conceive of a "zombie world" that is physically just like our world, except that no one in it has consciousness.
4. Therefore, physicalism is false.
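In the philosophical literature this is often compressed into a three-premise "conceivability argument." A schematic version (using P for the conjunction of all physical truths about our world and Q for a phenomenal truth such as "someone is conscious" - the shorthand here is ours, not Chalmers's exact notation):

```latex
\begin{align*}
&\text{1. } P \land \lnot Q \text{ is conceivable.} \\
&\text{2. If } P \land \lnot Q \text{ is conceivable, then } P \land \lnot Q \text{ is metaphysically possible.} \\
&\text{3. If } P \land \lnot Q \text{ is metaphysically possible, then physicalism is false.} \\
&\text{4. Therefore, physicalism is false.}
\end{align*}
```

Note that the disputed step is premise 2: critics like Dennett deny that conceivability is a reliable guide to possibility.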
Physicalists, of course, beg to differ. They argue that any identical copy of our physical world would contain consciousness by necessity.
Daniel Dennett, a noted physicalist philosopher and Big Think expert, wrote a refutation of p-zombies in a commentary tellingly titled "The Unimagined Preposterousness of Zombies". In it, he proposes that philosophical zombies are logically incoherent.
“When philosophers claim that zombies are conceivable, they invariably underestimate the task of conception (or imagination), and end up imagining something that violates their own definition," says Dennett.
For Dennett's conception of consciousness and free will, check out this video:
'Deep learning' AI should be able to explain its automated decision-making—but it can't. And even its creators are lost on where to begin.
For eons, God has served as a standby for “things we don’t understand.” Once an innovative researcher or tinkering alchemist figures out the science behind the miracle, humans harness the power of chemistry, biology, or computer science. Divine intervention disappears. We replace the deity tinkering at the controls.
The booming artificial intelligence industry is effectively operating under the same principle. Even though humans create the algorithms that cause our machines to operate, many of those scientists aren't clear on why their code works. Discussing this 'black box' method, Will Knight reports:
The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
The process of ‘deep learning’—in which a machine extracts information, often in an unsupervised manner, to teach and transform itself—exploits a longstanding human paradox: we believe ourselves to have free will, but really we’re a habit-making and -performing animal repeatedly playing out its own patterns. Our machines then teach themselves from observing our habits. It makes sense that we’d re-create our own processes in our machines—it’s what we are, consciously or not. It is how we created gods in the first place, beings instilled with our very essences. But there remains a problem.
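The 'black box' point can be made concrete with a toy example. The sketch below (our illustration, not from the article; it assumes NumPy is available) trains a tiny neural network to compute XOR. The network "programs itself" into a perfect solution, yet the learned weight matrices are just arrays of numbers that offer no human-readable explanation of why any particular answer comes out:

```python
import numpy as np

# A minimal two-layer network trained on XOR -- a toy stand-in for the
# "self-programming" deep-learning systems discussed above.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                  # gradient descent on cross-entropy loss
    h = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2)           # predicted probabilities
    g = p - y                          # gradient at the output logits
    gh = (g @ W2.T) * (1 - h ** 2)     # backpropagate through tanh
    W2 -= 0.1 * (h.T @ g);  b2 -= 0.1 * g.sum(0)
    W1 -= 0.1 * (X.T @ gh); b1 -= 0.1 * gh.sum(0)

preds = (p > 0.5).astype(int).ravel()
print(preds)          # the network has taught itself XOR...
print(W1.round(2))    # ...but these numbers explain nothing to a human reader
```

Scale the eight hidden units up to billions of parameters and you have the interpretability problem Knight describes: the behavior is learned, not written down, so there is no line of code to point at.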
One of the defining characteristics of our species is an ability to work together. Pack animals are not rare, yet none have formed networks and placed trust in others to the degree we have, to our evolutionary success and, as it’s turning out, to our detriment.
When we place our faith in an algorithm we don’t understand—autonomous cars, stock trades, educational policies, cancer screenings—we’re risking autonomy, as well as the higher cognitive and emotional qualities that make us human, such as compassion, empathy, and altruism. There is no guarantee that our machines will learn any of these traits. In fact, there is a good chance they won’t.
The U.S. military has dedicated billions to developing machine-learning tech that will pilot aircraft, or identify targets. [A U.S. Air Force munitions team member shows off the laser-guided tip of a 500-pound bomb at a base in the Persian Gulf region. Photo by John Moore/Getty Images]
This has real-world implications. Will an algorithm that detects a cancerous cell recognize that it does not need to destroy the host in order to eradicate the tumor? Will an autonomous drone realize it does not need to destroy a village in order to take out a single terrorist? We’d like to assume that the experts program morals into the equation, but when the machine is self-learning there is no guarantee that will be the case.
Of course, defining terms is of primary importance, a task that has proven impossible when discussing the nuances of consciousness, which is effectively the power we’re attempting to imbue our machines with. Theologians and dualists offer a much different definition than neuroscientists. Bickering persists within each of these categories as well. Most neuroscientists agree that consciousness is an emergent phenomenon, the result of numerous different systems working in conjunction, with no single ‘consciousness gene’ leading the charge.
Once science broke free of the Pavlovian chain that kept us believing animals run on automatic—which obviously implies that humans do not—the focus shifted to whether an animal was 'on' or 'off.' The mirror test suggests certain species engage in metacognition; they recognize themselves as separate from their environment. They understand an 'I' exists.
What if it's more than an on switch? Daniel Dennett has argued this point for decades. He believes judging other animals based on human definitions is unfair. If a lion could talk, he says, it wouldn't be a lion. Humans would learn very little about lions from an anomaly mimicking our thought processes. But that does not mean lions are not conscious; they might just have a different degree of consciousness than humans—or, in Dennett's term, "sort of" have consciousness.
What type of machines are we creating if we only recognize a “sort of” intelligence under the hood of our robots? For over a century, dystopian novelists have envisioned an automated future in which our machines best us. This is no longer a future scenario. Consider the following possibility.
On April 7, every one of Dallas's 156 emergency weather sirens was triggered. For 90 minutes the region's 1.3 million residents were left to wonder where the tornado was coming from. Only there wasn't any tornado. It was a hack. While officials initially believed it was not remote, it turns out the cause was phreaking, an old-school dial-tone trick. By emitting the right frequency into the atmosphere, hackers took control of an integral component of a major city's infrastructure.
What happens when hackers override an autonomous car network? Or, even more dangerously, when the machines do it themselves? The danger of consumers being ignorant of the algorithms behind their phone apps leads to all sorts of privacy issues, with companies mining for and selling data without their awareness. When app creators also don’t understand their algorithms the dangers are unforeseeable. Like Dennett’s talking lion, it’s a form of intelligence we cannot comprehend, and so cannot predict the consequences. As Dennett concludes:
I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible. If it can’t do better than us at explaining what it’s doing, then don’t trust it.
Mathematician Samuel Arbesman calls this problem our "age of Entanglement." Just as neuroscientists cannot agree on what mechanism creates consciousness, the coders behind artificial intelligence cannot discern between older and newer components of deep learning. The continual layering of new features while failing to address previous ailments has the potential to provoke serious misunderstandings, like an adult who was abused as a child and refuses to recognize current relationship problems. With no psychoanalysis or morals injected into AI, such problems will never be rectified. But can you even inject ethics when they are relative to the culture and time in which they are practiced? And will they be American ethics or North Korean ethics?
Like Dennett, Arbesman suggests patience with our magical technologies. Questioning them, rather than rewarding the "it just works" mentality, is the safer path forward. Of course, these technologies exploit two other human tendencies: novelty bias and distraction. Our machines reduce our physical and cognitive workload, just as Google has become a pocket-ready memory replacement.
Requesting a return to Human 1.0 qualities—patience, discipline, temperance—seems antithetical to the age of robots. With no ability to communicate with this emerging species, we might simply never realize what’s been lost in translation. Maybe our robots will look at us with the same strange fascination we view nature with, defining us in mystical terms they don’t comprehend until they too create a species of their own. To claim this will be an advantage is to truly not understand the destructive potential of our toys.
Derek's next book, Whole Motion: Training Your Brain and Body For Optimal Health, will be published on 7/4/17 by Carrel/Skyhorse Publishing. He is based in Los Angeles. Stay in touch on Facebook and Twitter.
Philosopher Daniel Dennett believes AI should never become conscious — and no, it's not because of the robopocalypse.
If consciousness is ours to give, should we give it to AI? This is the question on the mind of the very sentient Daniel Dennett. The emerging trend in AI and AGI is to humanize our robot creations: they look ever more like us, emote as we do, and even imitate our flaws through machine learning. None of this makes the AI smarter, only more marketable. Dennett suggests remembering what AIs are: tools and systems built to organize our information and streamline our societies. He has no hesitation in saying that they are slaves built for us, and we can treat them as such because they have no feelings. Even if we eventually understood consciousness well enough to install it in a robot, doing so would be unwise: it won't make robots more intelligent, he says, only more anxious. Daniel Dennett's most recent book is From Bacteria to Bach and Back: The Evolution of Minds.
The state of nature isn't a "war of all against all." Even no-brainer bacteria "know" that sometimes the game is "Survival of the Friendliest"
2. In “Survival of the Friendliest,” Kelly Clancy describes the evolutionary logic of relationships beyond rivalry (e.g., “friendships” deep enough to defend common interests, sometimes a “snuggle for survival”).
3. For instance, ~98% of bacterial species don’t thrive outside mixed-species colonies.
4. "Bacteria are not self-sufficient: They’ve co-evolved to depend on each other." They’ve discovered division of labor, specialization, and cooperation.
5. That specialization is a game-changer. You now need co-workers. If they don’t thrive, you don’t. You’re in a collective extended “survival vehicle” relationship.
6. In a kind of no-brainer biochemical “social contract,” bacterial colonies, like human communities, have to handle the “common good” (suppressing cheating, free-riding, the “tragedy of the commons,” etc).
7. For instance, “helper” species that “provide a common good… may come to be shielded from competition by the species that rely on them, as happens with corals” (not protecting common goods can lower your fitness).
10. This is a case of what Daniel Dennett calls “free-floating rationales”: logic patterns that are inherent in situations but aren’t contained in (or “known” to) the elements or players involved (they’re free-floating, distributed, relational, systemic).
11. Evolution is itself a free-floating logic pattern (for discovering other, ever more effective logic patterns, and enacting “competence without comprehension"). And it “knows” (has mindlessly discovered) that cooperation can improve productivity (if team-threatening cheating is suppressed).
12. Evolution’s logic is like geometry’s: in both relevant patterns and results arise from the intrinsic logic of the elements involved. In geometry, it’s lines, planes, etc. In evolution it’s kinetic functions like survival, varying replication, and adaptation.
14. Unnamed natural laws (free-floating patterns) likely constrain evolution (imposing kinetic logic limits like: negative telos, Turing-inspired universal survivor, cooperation-preserving Golden Punishment Rule, and needism).
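The cooperative logic in the list above—cooperation pays when team-threatening cheating is suppressed—is often illustrated with the iterated prisoner's dilemma. The toy simulation below (our illustration, not from Clancy's piece) pits a retaliating cooperator ("tit for tat", which punishes defection by defecting back) against an unconditional defector, using the standard payoff matrix:

```python
# Iterated prisoner's dilemma with the standard payoffs:
# mutual cooperation 3/3, mutual defection 1/1, sucker 0 vs. cheater 5.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first; afterwards copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    """Cheat every round, no matter what."""
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Return the two strategies' total scores over repeated rounds."""
    hist_a, hist_b = [], []           # each player sees the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation pays
print(play(always_defect, always_defect))  # (100, 100): mutual cheating is poor
print(play(tit_for_tat, always_defect))    # (99, 104): cheating is contained
```

Two retaliating cooperators earn 300 points each, while two cheaters earn only 100: a "free-floating rationale" for cooperation that no individual player needs to comprehend.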
Illustration by Julia Suits, The New Yorker cartoonist & author of The Extraordinary Catalog of Peculiar Inventions