
Why the only way to ride the company AI wave is experimentation

AI is both a tool and a catalyst — and the key to successful integration is to rewrite your rule book and tinker.
An illustration of a chess board with blue and orange dots, representing the AI wave. (Credit: Steve Johnson / Unsplash+ / Big Think)
Key Takeaways
  • The contours of AI’s abilities will become known only after you start using it.
  • You can’t just entrust a few architects with the “new way of doing things.”
  • To make the best use of AI, find ways of “stacking” models to solve your particular problem.

Sometimes I think back to what life was like BC, before ChatGPT. Things that seemed impossible then, things we thought of as science fiction, things many were sure couldn’t be done. A large number of those things are now reality.

We finally have a machine that can see and hear, and that we can talk to about anything for hours on end. It can teach us, code for us, write for us, and quiz us. It can paint and draw and create 3D art or even entire games.

Naturally, the market responded. We are increasingly finding ways to make use of this and to make ourselves more efficient. Among the first jobs under threat were copy editing and graphic design. There’s also talk of generative AI taking on paralegal and research work, and there’s even discussion inside Google of how it will affect sales jobs.

What we’ve made is the first “fuzzy” processor, where outputs aren’t rigidly predictable based on the inputs, unlike everything we’ve known in software so far. It can take any data in any format, and transform it into any other. Text to image. Image to movie. Movie to text. Speech to code. Code to diagram. Code to code in a different language. You name it, it’s now possible. And this changes almost everything we think we know about what the corporate world should look like. 

Actively engage with AI

Large language models (LLMs) fall somewhere between a probabilistic piece of software and a brand-new employee who doesn’t know anything about your company. Which means pretty much the only way to test whether they’re useful is to try them out. To experiment. There are plenty of evaluations, tests, and leaderboards, but none of them are good enough to tell you who to hire. The contours of their abilities are still unknown, and in most cases those contours will become known only after you start using them. Researchers call this a jagged frontier.

It’s like the cloud or mobile or the internet: The possibilities are still inchoate, and realizing them will require redesigning what you have. Three-year transformations aren’t going to help. Making this work within organizations is hard, because by definition it doesn’t fall neatly into anyone’s bucket of responsibilities. You might not get immediate ROI, but the costs are real.


You can’t just entrust a few architects with the “new way of doing things.” Instead, the only way to generate value from AI is to have people across the organization, in every silo and every domain, actively engage with it. A Harvard study conducted with Boston Consulting Group found that using AI substantially improved the consultants’ work across task types.

Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without. Those are some very big impacts. 

AI seems to even out skills across the organization, upskilling the least-skilled people and leveling the playing field. But despite the benefits, the study also found areas where using AI made the output worse: tasks where users were less equipped to ask the right questions or to use the advice the AI gave. This, again, is the jagged frontier. It’s very difficult to know a priori the best way to use AI.

A new way of doing things

If you think about the fact that LLMs are an anything-to-anything converter, the biggest benefit is that they let you retool everything. They allow you to do whatever you’re doing much better, faster, and cheaper. You can have drastically smaller teams producing output equivalent to teams 10 times their size elsewhere. It’s an enormous Archimedean lever! Once it works, that is. If it works.

Every organization runs on an enormous number of software systems. These systems are kept in sync mostly by fragile humans who interpret the output of one as the input for another. This will shift, and for it to work you need a new way of doing things.

Don’t just think about “processes” that need to be mapped or “schemas” that need to be designed; think instead about how you can train or use an AI to do things for you, to think as an employee would. That will mean redirecting a large chunk of resources.


So how might you actually try to do this? One way is to start with the easiest parts: writing text and reports, especially internal ones; creating marketing materials or slide designs; writing code to glue one system to another. You could test GPT-4 and Claude 2 on some of these and see which parts they succeed at, then try an open-source model such as Mistral so you learn how to combine models into the actual pipeline you want. You could try Stable Diffusion for images.
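As one sketch of what that side-by-side testing can look like in practice, the snippet below sends the same prompt to several chat-completion endpoints that share the OpenAI-style request format (which many open-source servers also expose). The model names, URLs, and the report-writing task are assumptions for illustration, not a recommendation of any particular stack:

```python
import json
from urllib import request

# Hypothetical endpoints for the experiment. Any OpenAI-compatible chat
# endpoint (including a locally hosted open-source model) accepts the
# same request shape, which is what makes quick comparisons cheap.
MODELS = {
    "gpt-4": "https://api.openai.com/v1/chat/completions",
    "mistral": "http://localhost:8000/v1/chat/completions",
}

def build_payload(model: str, prompt: str) -> dict:
    """Build the chat-completion request body shared across providers."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature for more repeatable comparisons
    }

def ask(model: str, url: str, prompt: str, api_key: str = "") -> str:
    """Send one prompt to one model and return the reply text."""
    req = request.Request(
        url,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example run (requires real endpoints and an API key): send the same
# internal-report task to every model and eyeball which one succeeds.
# task = "Write a one-paragraph internal status report on the Q3 migration."
# for name, url in MODELS.items():
#     print(name, "->", ask(name, url, task, api_key="..."))
```

The point of the uniform request shape is that swapping a proprietary model for an open-source one becomes a one-line change in the `MODELS` table, so the experiment stays cheap to repeat.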

We’re so early it’s still extremely unclear if it makes sense to think of the use cases like the cloud, or like another API provider, or like a processor. And the answer might well differ for each use case. 

Keep your options open

Contrary to almost all software wisdom we’ve learnt over the past two decades, with LLMs it’s much easier to go more general first in capabilities, rather than more specific. You can’t start in a niche if you’re training your own model: there might not be enough data, and the model might not learn enough logic. It’s much better to do broad pre-training and then fine-tune to your context. It’s easier to teach logic and reasoning first, like sending it to university, rather than memorization of new domain-specific facts. This is true whether you’re talking about microlithography or additive manufacturing or target protein identification. 

In fact, the true surprise of generative AI is that it took building an AI that can do almost everything before it became good enough for the few things it is actually used for.

So the way to make use of AI is to find ways of stacking it to solve your particular problem. That means pursuing more audacious programs without clear applications, and being willing to “tinker.”

The best way to play the situation is to keep your options open. If you’re building things, use open-source models as much as possible: they keep getting significantly better, and they’re almost as easy to use as the proprietary incumbents. Big Tech has adopted the technology at warp speed and will win here regardless of what happens, but when you’re deciding what to build for your use case, the big vendors aren’t trying very hard to be flexible.
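One low-commitment way to keep those options open in code is to write your pipeline against a tiny interface of your own rather than any vendor’s SDK. The Python sketch below illustrates the idea; `ChatModel`, `StubModel`, and `summarize_ticket` are hypothetical names invented for this example:

```python
from typing import Protocol

class ChatModel(Protocol):
    """Minimal interface: everything downstream depends only on this."""
    def complete(self, prompt: str) -> str: ...

class StubModel:
    """Deterministic stand-in, useful for tests and for wiring the pipeline
    before committing to any particular provider."""
    def __init__(self, prefix: str = "stub"):
        self.prefix = prefix

    def complete(self, prompt: str) -> str:
        return f"{self.prefix}: {prompt}"

def summarize_ticket(model: ChatModel, ticket_text: str) -> str:
    """Business logic written against the interface, not a vendor SDK."""
    return model.complete(f"Summarize this support ticket:\n{ticket_text}")

# Swapping providers later means adding another class with a .complete()
# method, not rewriting the pipeline:
print(summarize_ticket(StubModel(), "Login page times out."))
```

Adding a real provider, open-source or proprietary, then means writing one adapter class; the rest of the organization’s code never learns which model is behind it.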

This means the question is not about finding ever more specific use cases and then identifying the easiest way to solve them, but about committing to build something that could answer many of the questions you have, and then coaxing it to get better at answering them.

The reason this is hard is that it’s the exact opposite of how we deal with software. You can’t gather precise requirements or user feedback before knowing what to build. You can’t write specific, easily measurable unit tests first. Going narrow first is a surefire way to fail. The only way forward is exploration.

Restructuring the business of business

We shouldn’t just plug AI, LLMs or otherwise, into our current systems and hope it will work; we should use it to build better ones. But there’s good news. This is unlikely to be an area where there are a couple of clear winners you can seamlessly deploy in a year or two. It’s likely to remain a place where you win based on the combination of technologies you bring in, the way you mix them together, and the way you make them work within the existing organizational construct. 


Which means the fact that you know your industry or process or niche better than anyone else will remain a competitive advantage, assuming you use it to build something brilliant. You’ll never get a perfect solution that can solve your problems and increase EBITDA with no effort, no matter what the software vendors promise, so you need to build those muscles yourself, just like with cloud or with mobile or with the internet.


AI has a duality: It’s both a tool and a catalyst. It takes over routine tasks, but only by helping us reimagine those tasks as they exist today. It democratizes a large amount of expertise and helps juniors level up, but it has hard edges that aren’t obvious until you run into them. Recognizing this is the first step toward restructuring what the business of business might look like.

The key question is not which roles AI can do or which jobs are gone forever; it is: What is the price of expertise being made free?
