Who's in the Video
Brendan McCord is the founder and Chair of the Cosmos Institute and a key thinker at the intersection of AI and philosophy. In the private sector, Brendan was the founding CEO[…]

Artificial intelligence is transforming our world, prompting us to revisit fundamental philosophical questions about human existence and purpose. In this interview, Brendan McCord, founder of the Cosmos Institute, examines how philosophical insights from thinkers like Aristotle, John Stuart Mill, and Alexis de Tocqueville can inform our approach to AI, ensuring it enhances rather than undermines human flourishing.

McCord outlines three crucial steps to align AI with the principles of autonomy, reason, and decentralization. By drawing on ancient wisdom, we can navigate the complexities of modern technology and create a future where innovation and human values coexist.

Reflecting on the transformative ideas from Copernicus to Turing, this interview offers a roadmap for finding our place in the cosmos amidst the AI revolution. Explore how we can build a society that prioritizes human potential in the age of technology.

We made this video in partnership with the Cosmos Institute, a network of thinkers and builders advancing human flourishing in the AI era.

BRENDAN MCCORD: AI has opened a new continent, and humanity is setting foot on its shores. Since humans have walked the planet, we've always been the most impressive, intelligent beings. But today, AI is forcing us to ask the question: What does it mean to live a flourishing human life? Major moments in science and technology shed new light on long-standing philosophical questions. Copernicus dislodged us from our central position in the cosmos. Charles Darwin and his theory of evolution situated us among the animals. And Albert Einstein and his theory of relativity revealed the limits of our senses and our bare intuition. Now, in the age of Turing, technology pushes us to philosophy once again. Philosophy offers not godlike wisdom, but a powerful set of tools of inquiry to move past dogma, past tribalism, and navigate the journey ahead. And the stakes have never been higher.

For the past 200,000 years, humans have created technology as tools, as means to our ends. But today, we're creating technology in AI that has the potential to shape both the means and the ends of human endeavor. The printing press never determined what was printed, but today, 20% of human discretionary time is mediated by algorithms that do determine what information is consumed and even how we decide what's good for us. The upside potential of the technology is staggering. Consider cancer. Today, a doctor can feed medical images to AI, and AI can detect tumors. Tomorrow, AI may even be able to expand our fundamental knowledge of cancer, acting like a human scientist would, and maybe even much better.

But what happens when the technology evolves from a tool to an overseer? What happens when it starts to substitute for our most essential human capacities? One risk is that for the sake of incremental convenience, we offload big aspects of our existence to a kind of superintelligent schoolmaster that tells us what to think and what to do. Another is that we create a governance structure to control the technology that ends up doing the same. How do we realize the benefits of AI while protecting the active use of freedom, that precious gift of modernity, that allows us to realize what makes us essentially human?

Today, there are two primary philosophies in the AI field. The first is existential pessimism, or the view that AI poses a dire risk to humanity. This school wants to hit the pause button. But imagine hitting pause before Darwin, before Einstein, or at any point in human history. The losses to humanity would have been incalculably large. In an open society, hitting pause is as impossible as it is unwise. But what is possible, and even likely, is that the quest to optimize the world for safety ends up creating a governance structure that suffocates innovation, harms freedom, and amplifies bad actors.

The opposing school, accelerationism, embraces the power of markets and the role of optimism. But some of the leaders of this movement neglect to put the human at the center. For them, technology shifts from being a means to being the sole end of human life. Some of these individuals want to accelerate towards the moment when humans pass the baton to AI as the next link in the evolutionary chain. Now I understand the appeal of saving humanity from extinction. I understand the appeal of building God, but both of these philosophies are misguided. What we need is a genuinely positive, humanistic vision for the future of AI.

Here are three steps to get there: Step one: The North Star. Just like the north star gives you direction, any new AI philosophy needs to be oriented around the goal of human flourishing. Human flourishing here means developing your distinctly human capacities and applying them to become the person that you aspire to be. We need AI and its governance to serve humans rather than humans serving it.

Step two: The Compass. A compass is a tool to help us navigate, to help us make decisions along the way. There are three points on the AI compass: one is human autonomy, two is reason, and three is decentralization. Human autonomy is the state of being free in our mind and interactions. Aristotle taught that autonomy was essential. Without autonomy, we shift from free agents to passive agents who are increasingly dependent on algorithms to tell us what to do and how to think. Reason is the superpower we use when our mind considers alternatives in pursuit of the truth. John Stuart Mill fought for widened access to diverse and competing opinions because he thought it was essential for the pursuit of truth and the cultivation of reason. If AI causes reason to atrophy, humans will slip into dogmatism, intolerance, and persecution, and fields of endeavor that depend on reason, like science, will stall. Decentralization is like having millions of mini captains instead of one big boss at the top. When Alexis de Tocqueville stepped on the shores of America, he saw a nation characterized by spontaneous association of individuals who, from the bottom up, were rallying together around common interests. If we lose decentralization, then individuals will succumb to the strong centralizing forces in society, whether that is majority opinion on social media, state control, or a superintelligent master planner.

Step three: Navigating the New World. How do we use the north star and the compass to start navigating this new world? In the past, we built a philosophy-to-law pipeline. Today, we need to build a philosophy-to-code pipeline. We need to bring practitioners together with philosophers to translate these vital concepts of human autonomy, reason, and decentralization into the planetary-scale systems shaping our future. To do this, we're helping create the first AI lab in the world dedicated to translating the principles of human flourishing into open-source software. It's called the Human-Centered AI Lab at Oxford University. Top technology talent, more than ever before, needs to build systems that are animated by this north star of human flourishing. But this requires cultivating a new kind of technologist: one with world-class AI talent on the one hand, and a deep capacity for philosophical thought on the other. Then we need to support those individuals who can build provocative new prototypes to show us how AI and human flourishing come together.

We're at a pivotal point in human history. Taking first steps into the unknown. If we follow the north star, if we use the compass, we can settle on this new continent. We can build a vibrant society from the ground up. And we can ensure that technology amplifies human potential and doesn't diminish it. From Copernicus to Turing, it's time to once again find our place in the cosmos.
