
What happens the day after humanity creates AGI?

“We are racing towards a new era in which we outsource cognitive abilities that are central to our identity as thinking beings,” writes computer scientist Louis Rosenberg.
Key Takeaways
  • Predictions about the impacts of artificial general intelligence (AGI) often focus on societal disruptions, such as job displacement or the rise of AI-driven manipulation.
  • But computer scientist Louis Rosenberg argues that AGI’s most profound effect may be more personal: a philosophical identity crisis.
  • If we choose to thoughtlessly outsource our thinking to AGI, Rosenberg argues we’ll be fundamentally undermining what it means to be human.

Over the past few weeks, I’ve been asking people in my circle of AI professionals if they are mentally prepared for artificial superintelligence. They tend to shrug and express various worries: potential impacts on the job market or the threat of AI-powered misinformation. They also mention the potential upsides, like the ability of superintelligence to help us cure diseases, revolutionize clean energy, unravel the mysteries of the Universe, and maybe even bring about world peace. 

In other words, they have never really considered what life will be like the day after an artificial general intelligence (AGI) that exceeds our own cognitive abilities becomes widely available. From knowledge and expertise to planning, reasoning, creativity, and problem-solving, we could soon find ourselves thoroughly outmatched. This is a genuine possibility in the very near future, and nobody I know is honestly confronting the profound (and potentially demoralizing) impact it could have on our identity as humans.

I apply this criticism to myself as well. I have been writing about the dangers of superintelligence for well over a decade, and I, too, have focused on specific tactical risks such as the AI manipulation problem and the arrival mind paradox. At the same time, I have failed to honestly confront what life will really feel like when we humans collectively realize we have lost cognitive supremacy on planet Earth and will likely never get it back. 

No, that last statement is not personal enough. 

What I failed to confront is what my life will really feel like when I am standing alone in an elevator — just me and my phone — and the smartest one speeding between floors is the phone. When you let yourself consider this, you realize that the biggest impact on humanity will not be the looming upheaval in the job market or the dangerous new AI technologies that we will undoubtedly use to manipulate each other.

No, the biggest impact on humanity will be the identity crisis that hits us like a robotic punch in the face, stunning us into a new reality where the AI in our pockets (and soon, embedded in our glasses, earbuds, or pendants) can solve any problem we encounter in our daily lives and do it faster, smarter, and more creatively than we could do ourselves.

In this new reality, we will reflexively ask AI for advice before bothering to use our own brains. And in many cases, we will not even need to ask — the guidance will just stream into our eyes and ears. That’s because the AI in your wearable devices will have access to onboard cameras and microphones, enabling it to see and hear everything you see and hear, and thereby to track the full context of your life in real time. It will also compile a historical record of your actions and reactions, enabling it to accurately predict your behaviors and emotions in almost every situation, anticipating your wants and needs before they even surface in your mind.

In other words, these context-aware AI assistants will give you advice without you needing to say a word. As you walk down the street, the AI might see a jewelry store up ahead and remind you that your 20th anniversary is coming up and you need a gift for your wife. It will then help you pick out something she will love, as the AI will know her tastes better than you do. This will change who we are, how we live, and how we relate to other people.

The first time I wrote about the profound societal impact of context-aware AI agents was in my 2012 graphic novel UPGRADE. I called these agents “Spokegens” because they were AI-generated spokespeople who assist us and guide us, while also manipulating us on behalf of paying sponsors. What I failed to realize is that the biggest risk of context-aware AI agents is not the manipulative impact of deliberate influence.

The truth is, even if these superintelligent assistants are banned from overt manipulation, the very fact that they can outthink us could undermine our sense of agency and subvert our sense of self.

Imagine you have a superintelligent assistant sitting on your shoulder, observing your life and whispering advice into your ears. This will feel like a superpower at first, but a grand digital reckoning will surely follow as we all gradually realize that the voice in our ears is smarter than our own internal monologue. What happens when we trust the voice in our earbuds more than the voice in our heads? I worry we will become willing puppets to our AI assistants, not because of some nefarious plan to control us, but simply because they will feed us a stream of advice that outmatches anything we could come up with ourselves. 

I know many readers will push back, insisting they will never favor the AI voice in their ears over their own internal thoughts. I’d like to believe that, too, but I suspect I’m not the only idiot who has followed my GPS to the end of a dark alley even though I knew things didn’t look right. And GPS is far from superintelligent. It’s just a navigation tool that replaces your need to look at a map. But relying on a superintelligence that can out-reason you, out-plan you, out-negotiate you, and do it all more creatively than you ever could — that will surely hit at the essence of what it means to be human. How could that not feel demoralizing?

We are racing towards a new era in which we outsource cognitive abilities that are central to our identity as thinking beings. Some experts believe this will make us feel smarter and more capable, viewing it as augmentation, but that’s not the only way this could go. This age of “augmented mentality” could easily make us feel smaller, less confident, and less consequential. It could even undermine our personal relationships. For example, when your spouse gives you a gift, you will wonder if their AI picked it out for you. You will even wonder if the words they say as they hand you the gift are their own sentiments or if they’re being coached by their ever-present AI (see the short film “Privacy Lost” for fun examples).

I raise these concerns as someone who has spent my entire career creating technologies that augment human abilities, ranging from my early work developing augmented reality for the US Air Force decades ago to my recent AI work on collective superintelligence. I believe technology can profoundly enhance who we are, greatly expanding our physical and mental abilities while also keeping us human.

Unfortunately, there is an increasingly fine line between augmenting ourselves and undermining ourselves, or even replacing ourselves. Unless we are thoughtful in how we deploy context-aware AGI assistants, I fear we will cross that line. With this risk now racing toward us, I ask AI researchers and entrepreneurs to think past that fateful day when we collectively create AGI and focus on what it will feel like to be human the day after.
