AI is moving at breakneck speed. If you're not paying attention, you're already behind, and at risk of extinction. IMD Business School professor Michael Watkins highlights how critical it is to stay afloat in this era of rapid technological evolution and surf the ever-changing tide of AI.
MICHAEL WATKINS: One of the things that's just so fascinating about AI is how rapidly things can change. At the start of 2025, the major players, OpenAI and Microsoft, were developing these new models, thinking they were ahead of everybody in the world. And then, bam, this little startup from China called DeepSeek introduces a new model that's as good as or better than any of the existing models out there. And in a day, a trillion dollars of those companies' valuation is gone.
"Yeah. John, we've got a bit of a tech sell-off this morning, and it's being caused by earth-shattering developments in the AI space."
Enormous uncertainty arises about the progression and profitability of the industry. And so I think it's just an illustration of how rapidly the landscape can shift in very fundamental ways, and we have every reason to expect that that's going to keep happening.
When I talk about AI, I often put up two slides. The first is the dinosaur admiring an incoming asteroid, and the second is a surfer surfing a very large wave. You cannot afford to be the dinosaur. You must be the surfer. You need to be agile. You need to be adaptive. You need to embrace these systems and try out what's going on because if you don't understand it, you really can't hope to help your organization adapt and stay in front of this momentous, world-changing technology.
I'm Michael Watkins. I'm a professor of leadership and organizational change at the IMD Business School. I write extensively about leadership, about strategic thinking, and about the impact of AI on both.
So, of course, in November of 2022, generative AI exploded on the scene. But I think it's important to realize that artificial intelligence has been around for a lot longer and has gone through a very interesting and important evolution. Previously, artificial intelligence was mostly about analytics, machine learning, and pattern recognition. Largely, that's been invisible to most people except in the tools they use: the kinds of things that power your Netflix recommendations or your Amazon selections. But, of course, then generative AI really changed everything. And generative AI, as the name indicates, is about generation. It's about content generation. It's about creativity. It's about writing. And it has pretty fundamentally changed how virtually everybody doing white-collar jobs is doing their work these days.
And of course, AI continues to evolve. The most recent models are known as reasoning models. As the name indicates, these models are capable of what's known as chain-of-thought reasoning. They reason their way through problems step by step in a way that previous generations of LLMs simply were not able to do. That has enormous benefits for things like solving problems in science and mathematics, and planning becomes possible in a way it wasn't before. And this, again, is going to generate an enormous change in how the technology is used in business and beyond.
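To make the step-by-step idea concrete, here is a minimal sketch of eliciting that behavior explicitly, assuming the OpenAI Python client; the model name is a placeholder and the example does not come from the talk itself.

```python
# A minimal sketch of eliciting step-by-step reasoning, assuming the
# OpenAI Python client (pip install openai) and an OPENAI_API_KEY in
# the environment. The model name is a placeholder.
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 09:40 and arrives at 12:05. How long is the trip?"

# Direct prompt: the model answers in one shot.
direct = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)

# Step-by-step prompt: ask the model to lay out its reasoning first,
# which is the behavior reasoning models now perform natively.
stepwise = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": question
        + " Work through this step by step, then give the final answer on its own line.",
    }],
)

print(direct.choices[0].message.content)
print(stepwise.choices[0].message.content)
```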
So now the cutting edge is really evolving towards what's known as agentic AI. And that's AI as agents. Right? Agents out in the world doing things that are important for us. That could be, you know, booking your next flight for your vacation or deciding what you need in your refrigerator to feed yourself for the next couple weeks. And this is really an important evolution because it begins to make AI very active in the world in ways that are going to both, you know, generate tremendous benefit, but also potentially create some significant risks. As soon as you see AI beginning to be able to think and reason and then begin to make recommendations and decisions based on that thinking and reasoning, you open up the possibility for serious misuse, serious misguidance, potential for misinformation, even the ability of these systems to influence us in ways that we don't fully yet appreciate or understand.
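What "AI as agents" means mechanically is a loop: the model proposes an action, code executes it, and the result is fed back in until the goal is met. Below is a toy, hypothetical sketch of that loop using the refrigerator example; the tools, the decision rules, and the iteration cap are illustrative assumptions, not any real framework's API.

```python
# A toy agent loop: decide() stands in for a model call that picks the
# next tool; the loop runs the tool and appends the result to history.
from typing import Callable

def check_fridge() -> str:
    # Pretend sensor readout; a real tool would query something external.
    return "eggs: 2, milk: empty, butter: ok"

def add_to_shopping_list(item: str) -> str:
    return f"added {item}"

TOOLS: dict[str, Callable[..., str]] = {
    "check_fridge": check_fridge,
    "add_to_shopping_list": add_to_shopping_list,
}

def decide(history: list[str]) -> tuple[str, dict]:
    """Stand-in for the model: a real agent would send `history` to an
    LLM and parse the tool call out of its reply."""
    if not any(h.startswith("check_fridge") for h in history):
        return "check_fridge", {}
    if "milk: empty" in history[-1]:
        return "add_to_shopping_list", {"item": "milk"}
    return "done", {}

history: list[str] = ["goal: keep the fridge stocked for two weeks"]
for _ in range(10):  # hard cap: autonomous loops need guardrails
    tool, args = decide(history)
    if tool == "done":
        break
    history.append(f"{tool} -> {TOOLS[tool](**args)}")

print("\n".join(history))
```

The hard iteration cap is the simplest form of the guardrail the risks above call for: an agent that can act in the world should never be able to loop without bound.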
Today, the way we mostly interact with AI is through what's known as prompting, which essentially boils down to asking the AI, either in writing or verbally, to answer certain kinds of questions for us. As you're learning to prompt these systems, there are a few limitations and watch-outs you should keep in mind. The first is what's known as the hallucination problem: the tendency of these systems, when they don't have an answer, not to say, "Hey, I don't know," but instead to give you some fact that turns out to be completely erroneous. When I started using these systems to write articles, for example, I would ask for references, and they would very confidently give me three or four references which, when I checked, did not exist. You can trust their creativity, and you can trust their ability to help you generate ideas, but when it comes to facts, you are well advised, very well advised, to double-check almost everything.
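One way to put that double-checking habit into practice is to look up each AI-supplied reference in a bibliographic database before trusting it. The sketch below uses the public Crossref works API, which is real; the matching heuristic and the example citations are assumptions for illustration.

```python
# A hedged sketch of double-checking AI-supplied references against the
# public Crossref works API (https://api.crossref.org). Requires the
# `requests` package; the matching heuristic is deliberately crude.
import requests

def crossref_match(citation: str) -> bool:
    """Return True if Crossref finds a record whose title overlaps the citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=10,
    )
    items = resp.json()["message"]["items"]
    if not items:
        return False
    title = " ".join(items[0].get("title", [])).lower()
    # Crude word-overlap test; a real checker would compare authors and year.
    return any(word in title for word in citation.lower().split() if len(word) > 3)

# Hypothetical references, of the sort an AI might generate.
for ref in ["Watkins, The First 90 Days", "Imaginary AI Quarterly, vol. 7, 2023"]:
    verdict = "found a candidate" if crossref_match(ref) else "no match, verify by hand"
    print(f"{ref} -> {verdict}")
```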
A second watch-out is that these systems are programmed with personalities that are intended to be helpful. They're going to tell you nice things. They're going to compliment you. They want to make us happy. And you've almost got to work a little bit hard to say, "Don't try to make me happy." You can imagine the vulnerabilities that raises for all kinds of high-criticality applications where we need to be pretty certain that the right things are happening.

And that leads to a third big watch-out, which is that they seem to have a programmed desire to believe that everything is going to be okay. I was writing an article recently about the potential employment impacts of artificial intelligence. I asked it, "What do you think is going to happen with employment?" "Oh, it'll probably be okay if we do the following things." And then I said, "Well, no, wait a minute. Are you being over-optimistic? And if so, stop." "Well, it's going to be a disaster. There's probably going to be a loss of three or four jobs for every job that's created." So it's important to understand that there is a seeming inherent bias in these systems towards optimistic outcomes, especially regarding the impact of AI on the world. You've got to be very precise and very thoughtful, and ask the system directly to debias itself, to give you the real truth as it sees it. And even then, you can't necessarily be sure that's what you're going to get.
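The "don't try to make me happy" move can be baked into a standing instruction rather than repeated in every exchange. A minimal sketch, again assuming the OpenAI Python client with a placeholder model name; the wording of the instruction is the point, not the particular API.

```python
# A minimal sketch of a standing debiasing instruction, assuming the
# OpenAI Python client; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

DEBIAS_INSTRUCTION = (
    "Do not flatter me or soften bad news. If the evidence is mixed, say so. "
    "Give the most likely downside scenario alongside any optimistic one, "
    "and flag every claim you are not confident about."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": DEBIAS_INSTRUCTION},
        {"role": "user", "content": "What will AI do to white-collar employment?"},
    ],
)
print(response.choices[0].message.content)
```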
As you think about learning to be a great prompter of AI, it's a bit counterintuitive, because it's often quite different from how you would interact with another human being. AI has no ability to understand the broader context in which the question is being asked. It has no ability to interpret the emotional resonances that you're trying to communicate. This is particularly true with the reasoning models, because you're asking them to go through a chain of thinking, and if you start at the wrong point, it's rapidly going to head off in directions that are not all that useful. So especially for the reasoning models, giving the model lots of context and lots of specificity, being as precise as you possibly can from the outset, is absolutely the way to go.
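As a concrete illustration of "lots of context, lots of specificity," here are two versions of the same request side by side; the scenario details are invented for the example.

```python
# The contrast between a vague prompt and a context-rich one, as two
# literal strings. The scenario details are invented for illustration.

VAGUE_PROMPT = "Write me a strategy memo about AI."

PRECISE_PROMPT = """\
You are advising the COO of a 200-person European logistics company.
Goal: a one-page memo recommending whether to pilot AI route planning.
Constraints: budget under EUR 50k, no in-house ML team, GDPR applies.
Audience: a skeptical board, so avoid hype and name concrete risks.
Format: three sections in this order: recommendation, rationale, next steps.
"""
```

The precise version front-loads the goal, constraints, audience, and format, so the model's chain of thinking starts from the right point instead of guessing at the missing context.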
You personally need to engage actively with this technology, and it's not enough to just have some basic literacy. You need to be experimenting with new models as they come out, because only by doing that are you going to learn what the capabilities are and begin to track what's happening as they continue to advance. We have to be vigilant about the potential downsides, and we have to be quick to embrace the upsides. And above all, you have to be the surfer, riding the wave as things go forward. It's the only answer I've come up with for how to continue to thrive in a time of really extraordinary change.