- All organizations pivoting to generative AI need to ask whether their model hits key benchmarks.
- If the cost of adopting AI is prohibitive, companies can make incremental changes.
- Experimentation with enterprise AI is crucial; not all features result in long-term use and retention.
When Stack Overflow’s traffic apparently went into rapid decline this year, Elon Musk reacted on X with an epitaph: “Death by LLM.” (LLM stands for “large language model.”) His message sent a shockwave through the business world, underlining the imminent threat generative AI poses across a wide swathe of sectors — from accountancy and legal to media and software. But are all these companies really hurtling toward oblivion?
Stack Overflow is the go-to forum for programmers wanting help with coding problems. Musk’s tweet appeared to suggest LLM-based AI programs like ChatGPT were making the site rapidly obsolete as they offered the same information to coders more quickly. However, Stack Overflow’s response not only opened up a more nuanced picture of the AI landscape — it also contained valuable insights into possible enterprise strategies for riding the AI wave.
After clarifying its traffic statistics — down by 5%, not the 30% or 50% reported elsewhere — Stack Overflow acknowledged the potential for AI to cause ups and downs in engagement, adding a caveat: “There is one core deterrent in the adoption of AI-generated content: trust in its accuracy.” The company then announced a “roadmap for integrating AI” into its platforms, including easier conversational searches, attributed and cited responses, and a tagging structure — all of which, it hoped, would increase personalization, accuracy, and trust.
Tellingly, Stack Overflow will now charge large AI companies that want to train on its corpus of data. Reddit, another large forum, also announced it will charge the AI heavyweights to access its data. Other firms are quickly following this lead.
AI is my co-pilot
Most companies have now accepted that AI’s disruptive power — while potentially catastrophic — has opened up many opportunities. According to Goldman Sachs, generative AI could lift global labor productivity by over 1% annually for the next 10 years and boost global GDP by 7%.
It’s not only content and knowledge-based sectors that have had to respond quickly; drug companies and manufacturers are claiming rapid AI adoption in their research and marketing, for example. Pharma giant Merck is investing huge amounts in AI for drug research, and electronics maker Electrolux says it has pivoted toward an AI-led approach across its business — to improve demand planning, for example.
Rafael Oezdemir, founder and CEO of venture capital firm Zendog Labs, says large organizations are taking a “co-pilot” approach to adopting AI, perhaps adding a chatbot or other AI capabilities around existing products. For example, building an internal AI chatbot could enable a large consultancy to quickly grab intelligence from all past client work, increasing efficiency and freeing staff time. “Achieving that should help them protect their client base,” Oezdemir tells Big Think. “We’re yet to see whether that’s enough [to avoid disruption]. But it’s low-hanging fruit.”
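The internal chatbot Oezdemir describes typically starts with a retrieval step: find the most relevant past client work, then hand it to an LLM as context. A minimal sketch of that retrieval step is below, using keyword overlap and an invented corpus; the document IDs, texts, and scoring are purely illustrative, and a production system would use embeddings and a vector store rather than word counts.

```python
# Minimal sketch of the retrieval step behind an internal "co-pilot" chatbot:
# rank past client documents by keyword overlap with a query, then pass the
# best matches to an LLM as context. Corpus and scoring are hypothetical.
from collections import Counter

PAST_WORK = {  # invented examples of past client deliverables
    "acme-2022": "supply chain cost reduction through demand planning",
    "globex-2023": "ai chatbot rollout for customer service automation",
    "initech-2021": "legacy erp migration and data warehouse cleanup",
}

def tokenize(text: str) -> Counter:
    """Bag-of-words representation: lowercase word counts."""
    return Counter(text.lower().split())

def top_matches(query: str, docs: dict, k: int = 2) -> list:
    """Return the k document IDs sharing the most words with the query."""
    q = tokenize(query)
    scored = sorted(
        docs,
        key=lambda doc_id: sum((q & tokenize(docs[doc_id])).values()),
        reverse=True,
    )
    return scored[:k]

context_ids = top_matches("chatbot for customer service", PAST_WORK)
print(context_ids)  # "globex-2023" ranks first on keyword overlap
```

The retrieved documents would then be concatenated into the LLM prompt — the pattern usually called retrieval-augmented generation — so the chatbot answers from the firm’s own work rather than from general training data.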
All organizations pivoting to generative AI, says Oezdemir, need to ask whether their model hits three benchmarks: (1) it meets a clear need; (2) it improves unit economics; and (3) it is “defensible,” for instance, by using models and datasets that are unique and hard to copy.
“Computational costs of building AI models are so high, it could reduce your margin to unsustainable levels,” he says. “Are the efficiencies generated enough to counter that and improve unit economics? Many startups are struggling with this expense, leaving them with terrible gross margins. It is costing many large companies too. Investors are wondering what’s here to last. Anyone can compete with you if you are building a model based on ChatGPT, which offers an API that allows businesses to use it within their own products. What you do only becomes defensible when you can create your own specialized model, or train the model on your unique, proprietary data.”
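Oezdemir’s second benchmark — improving unit economics — comes down to simple arithmetic: does revenue per user still comfortably exceed cost per user once inference spend is added? The sketch below makes that arithmetic concrete; all figures are hypothetical, chosen only to show how heavy LLM usage can compress a healthy gross margin.

```python
# Back-of-envelope unit economics for an AI feature.
# All dollar figures are hypothetical, for illustration only.
def gross_margin(revenue_per_user: float,
                 inference_cost_per_user: float,
                 other_cost_per_user: float) -> float:
    """Gross margin as a fraction of revenue."""
    total_cost = inference_cost_per_user + other_cost_per_user
    return (revenue_per_user - total_cost) / revenue_per_user

# A $20/month subscription with $3/month of non-AI serving costs:
baseline = gross_margin(20.0, 0.0, 3.0)   # no AI spend
heavy_ai = gross_margin(20.0, 9.0, 3.0)   # $9/month of LLM inference
print(f"{baseline:.0%} -> {heavy_ai:.0%}")
```

On these invented numbers, the margin falls from 85% to 40% — exactly the kind of compression Oezdemir warns can leave startups “with terrible gross margins” unless the AI feature generates offsetting efficiencies or pricing power.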
Having a narrow use improves the chances of success, he adds. For example, Retrato uses AI to turn selfies into professional-looking headshots. “That’s a good narrow use. But you must be careful about the data and knowledge you use. One major way to get it wrong is trying to use information that doesn’t belong to you, such as the content of a videoconference meeting. Another is to use AI for things your customers still want a human to do.” Chatbots are a good example of the latter: most customers still prefer to talk to a human.
Experiment with AI and take risks
HubSpot is a mature marketing, sales, and customer service software provider that has been developing its generative AI capabilities, and recently unveiled AI tools including assistants, insights, a chat interface, and agents.
Executive vice president of product at HubSpot, Andy Pitre, tells Big Think: “Most marketers say their industry changed more in the past three years than in the last 50. AI will exacerbate these changes, so we must move quickly to ensure our customers are set up to win.”
The firm’s AI strategy links all the data in its customer relationship management (CRM) systems to customer engagement activity. This enables the AI to “know your content, have context about your business, and understand your customers, so you can connect with them better,” says Pitre. HubSpot admits it is still figuring out these new technologies, and uptake has been rapid but patchy. Some features get great activation, but not always long-term use and retention.
“That will be common with early AI uses,” adds Pitre. “But as technology improves, customers will find uses that stick. We haven’t begun to imagine all the coming changes and benefits. Our answer for companies wondering how to pivot with AI is: Get started, try things, experiment. The most successful will take risks, find what works, and learn from what doesn’t.”
Oezdemir says large companies like HubSpot have an advantage because they can plug AI solutions into existing products and distribution mechanisms, and they have internal data assets on which to train their own LLMs for specific uses. But that’s expensive: you need a data science team to build your own model, and often need to develop multiple models in parallel. If the cost is prohibitive, companies could make small, incremental changes that gradually build their model’s uniqueness.
Avoiding death by LLM
Have firms like Stack Overflow done enough to survive? “To some extent, Elon was right,” says Ruslan Salakhutdinov, professor of computer science at Carnegie Mellon University, and previously director of AI research at Apple. “It’s hard to predict what will happen to Stack Overflow. I don’t think it will go out of business. But whether coders go there or to other AI models for answers, it does mean they will be much more efficient — a team of, say, six coders may be able to do the work of ten.”
Currently, generative AI has clear limitations. It still can’t handle complex questions or scenarios — even when taking a fast-food order, for example. Large providers are trying to build bigger models on more data to address that issue. There are also problems with data privacy and security, not to mention issues with copyright, bias, toxicity, disinformation, and “hallucination” — wrongly presenting something as fact. The real winners and losers will not emerge until the AI industry solves these wider issues.
Musk’s provocative warning may have been overstated — and typically “Elon” — but his signal to enterprise was clear: Create an AI roadmap now, not tomorrow.