
What if AI stood for ‘Augmented Introspection’ as well as ‘Artificial Intelligence’? We’ve been given a precious do-over opportunity in this emergent era of AI technology, in which we can choose to redesign our cities and our selves to align more closely with what we want them to be. So, what do we want them to be? Michael Schrage, MIT research fellow and innovation leader, thinks we need to push past the base-level notion of AI servants and assistants. What individuals need to succeed economically and personally are digital tools that can augment (or suppress) our selves (that’s right, plural). Schrage’s vision of AI is informed by theories of mind developed by cognitive scientists and behavioral economists such as Daniel Kahneman, Marvin Minsky, Robert Kurzban, and Jonathan Haidt. “According to empirical scientific research, there’s no such thing as ‘the self’. In fact, the metaphor that many people use is that your mind is like a committee, and depending upon the time of day and your mood… one self or aspect of the self may dominate over another,” says Schrage. So what aspect of yourself do you most want to enhance, and what aspect do you want to mitigate? AI will help you do that. It will not, however, be a passive pushover that bends to your flaws: great AI, says Schrage, will “kick your assumptions in the groin.” Take the example of an online book recommender. A truly intelligent and introspective tool will not just show you books that echo what you’ve read in the past; it will suggest books that are completely outside your wheelhouse. It will not simply serve you, it will stretch your thinking. Michael Schrage’s most recent book is The Innovator’s Hypothesis: How Cheap Experiments Are Worth More Than Good Ideas.


I like to say that the real future of AI is not artificial intelligence but augmented introspection. All of these different theories of mind and theories of self create opportunities for redefining what we want augmented introspection to be.
Look at Eric Berne's transactional analysis: parent, child, adult. Different ways of thinking about the self. Freud: ego, id, superego.

Let's use the enormous body of research that has gone on in psychology about the nature of the self: the nature of the self as an individual, and the nature of the self as it relates to other people. And let's use technology to create versions of the self that allow for more efficacious, more effective, more creative, more innovative, more productive interactions.

I think it's important to draw a distinction between a cognitive tool or enhancement and a multiple self, a version of one's self that amplifies, exaggerates, improves, enhances, augments some attribute that really matters to you. So here's a simple way of thinking about this. We have Google Maps. It tells you a way to go. You're in a hurry. You're in such a hurry that you're prepared to, dare I say, cut corners and go through a stop sign or two. That's your bold, impatient self. Well, should Google Maps make a recommendation that supports your bold, impatient self?
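To make that question concrete, here is a minimal, hypothetical sketch (this is not how Google Maps actually works) of a router that makes explicit which self it is optimizing for: a profile trades travel time against caution, so the bold, impatient self and the careful self get different recommendations. The route data, weights, and profile names are illustrative assumptions.

```python
# Hypothetical sketch: a route ranker parameterized by which "self" it serves.
# Routes, weights, and profile names are made up for illustration.

from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float
    risky_maneuvers: int  # e.g., rolling stops, tight merges

PROFILES = {
    "bold_impatient": {"time_weight": 1.0, "risk_penalty": 0.5},
    "careful":        {"time_weight": 1.0, "risk_penalty": 10.0},
}

def score(route: Route, profile: dict) -> float:
    # Lower is better: travel time plus a penalty for risky maneuvers.
    return profile["time_weight"] * route.minutes + profile["risk_penalty"] * route.risky_maneuvers

routes = [Route("shortcut", 12, 3), Route("main road", 16, 0)]

for self_name, profile in PROFILES.items():
    best = min(routes, key=lambda r: score(r, profile))
    print(f"{self_name} self -> {best.name}")
```

The point of the sketch is only that the trade-off is an explicit, user-chosen parameter rather than a hidden default.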

Or you're doing research on Amazon, looking for books to read on a certain subject. You don't want the mainstream books that the typical, average, normal you would order. What about the you that looks to challenge your fundamental assumptions, to stretch your thinking? What if you had a recommendation engine that recommended things for you to read that pushed your thinking? That challenged you? That kicked your assumptions in the groin? That's a different kind of tool.
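One way such a "stretch" recommender could work, sketched here under assumed topic vectors and made-up titles rather than any real recommender's design: instead of ranking candidates by similarity to your reading history, rank them by dissimilarity, so the top picks sit outside your wheelhouse.

```python
# Toy sketch of a "stretch" recommender: rank candidates by how far they sit
# from the reading history instead of how close. Vectors are illustrative.

from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical topic vectors: (business, psychology, fiction, science)
history = [
    ("Thinking, Fast and Slow",     (0.2, 0.9, 0.0, 0.3)),
    ("The Innovator's Hypothesis",  (0.9, 0.3, 0.0, 0.2)),
]
candidates = [
    ("Another management book", (0.9, 0.2, 0.0, 0.1)),
    ("A poetry anthology",      (0.0, 0.1, 0.9, 0.0)),
    ("A physics primer",        (0.1, 0.1, 0.0, 0.9)),
]

def taste_similarity(vec):
    # How close a candidate is to anything already read.
    return max(cosine(vec, h) for _, h in history)

# "Typical you": most similar first. "Stretch you": least similar first.
typical = sorted(candidates, key=lambda c: -taste_similarity(c[1]))
stretch = sorted(candidates, key=lambda c: taste_similarity(c[1]))

print("typical self:", [t for t, _ in typical])
print("stretch self:", [t for t, _ in stretch])
```

A real system would presumably blend the two rankings rather than abandon relevance entirely, but which blend to choose is exactly the "which self" question being raised here.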

Now, what do these things have in common? The self is about agency. It's about choice. You, however you want to define it, choose. What I'm saying is that AI, machine learning, and smart, bespoke, customized recommenders make something different possible: you can pick different frontiers, different boundaries of yourself. What aspect of yourself do you most want to enhance and advance? What aspect of yourself do you want to mitigate or suppress? I think this cuts both ways. I think we're going to see versions of the self that allow for greater compassion, less impatience.

So I think it's important to think of the self in a symmetrical way. What are the aspects we want to improve and augment? What are the aspects that we wish to mitigate or ameliorate? What's the common denominator? Give people the choice. This is why I like recommendation engines so much. They're not called "you have to do this" engines; they're not "you have to buy or watch this" engines. They're recommendation engines, and you choose. What I'm proposing is that we're choosing from the span and portfolio of multiple versions of who we really are, or who we might become if that's what we want.

What I have found so fascinating in this is that there's so much exciting, compelling, and groundbreaking work in psychology, sociology, and behavioral economics about what the self is and who the self is. There's Nobel Prize-winning work from Daniel Kahneman, Thinking, Fast and Slow, about the immediate self and the more thoughtful, slow-process self. There's the late Marvin Minsky, one of the pioneers of artificial intelligence, who talked about a society of mind and the modular brain. There's Robert Kurzban. There's Jonathan Haidt. There are social psychologists and anthropologists who are looking at who the self is. What's the difference between one's self and one's identity?

And you know what the common denominator of all of this is? You are not you. According to empirical scientific research, there is no such thing as the self. In fact, the metaphor that many people use is that your mind is like a committee, and depending upon the time of day, your mood, and how your brain (and mind) is working, you might make one kind of decision or choice versus another. One self, or aspect of the self, may dominate over another.

There's a wonderful essay by Thomas Schelling, again a Nobel laureate, on the challenge of command, personal command, self-command, in which he talks about all of these competing selves and whether you will follow this advice, go down this pathway, honor a commitment, et cetera.

My insight is: let's use these different dimensions of the self as a design opportunity. Let's look at the society of mind and ask what kind of digital tools should support this kind of society. With Daniel Kahneman's fast and slow thinking, what kind of nudges do we want to support the faster brain versus the slower one?

Bits and slices of this are already becoming a reality. It's easy to imagine scenarios where people who care about self-improvement, dare I say selves improvement, look at their technologies and say: I want to be a better leader. I want to be a better manager. I want to be a more persuasive or influential designer.

We begin with tools that enable us to do that, but then we say: well, if I keep using these sorts of tools in these sorts of ways, I become a different self. What does that self look like? What kind of digital avatar or simulation or scenario can I create to see if that's the kind of self I should be? And that ties into the point about agency. I want to see what my most compassionate self looks like. I can use textual analysis, sentiment analysis, to become more influential in my emails, in my chats, in my presentations, but does my becoming more influential mean that I'm also becoming more manipulative? I don't want to be manipulative. These are the kinds of challenges that augmented introspection creates. Who do we want to be?
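Purely as an illustration of what that kind of reflective tooling might look like (the word lists, function name, and scoring below are assumptions, not any real product's method), here is a toy sketch that reports a naive persuasion signal alongside phrases that might read as pressuring, so the writer decides which self to amplify rather than being silently optimized toward influence.

```python
# Toy "augmented introspection" check for an email draft: it does not rewrite
# the text, it surfaces signals and leaves the choice to the writer.
# Word lists and scoring are purely illustrative.

PERSUASIVE = {"because", "evidence", "results", "recommend", "propose"}
PRESSURING = {"urgent", "last chance", "you must", "everyone agrees", "only today"}

def review_draft(text: str) -> dict:
    lowered = text.lower()
    persuasion_hits = [w for w in PERSUASIVE if w in lowered]
    pressure_hits = [p for p in PRESSURING if p in lowered]
    return {
        "persuasion_score": len(persuasion_hits),
        "possible_pressure": pressure_hits,  # prompts reflection, not a verdict
    }

draft = "This is urgent. Everyone agrees we should ship now because the results demand it."
print(review_draft(draft))
```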

What kind of people do we want to be? The reason I focus so much on employees in the workplace is that we live in a time where we're not just concerned about losing our jobs to smarter machines; we're worried about how we create value, how we become more productive, and how we make ten years of experience more than one year of experience repeated ten times.

We've already seen a complete revolution of digital tools. We're already seeing the first and second generation of bots and agents coming into the workplace. Completely understandable. Not enough. Not enough. Self-improvement is going to come from selves improvement.

