
AI is an incredible “intelligence equalizer” — if we use it smartly

Engagement with generative AI is a business essential — but all companies should be vigilant.
'Work Different: 10 Truths for Winning in the People Age' by Kate Bravery, Ilya Bonic, & Kai Anderson / Wiley
Key Takeaways
  • We all need to get comfortable with the new language of generative AI, to minimize the risks and maximize the gains.
  • Early AI adopters have installed guardrails and instituted checks and balances.
  • Leveraging internal HR data sets to train custom AI models will bolster workforce insights.
Excerpted with permission from the publisher, Wiley, from Work Different: 10 Truths for Winning in the People Age by Kate Bravery, Ilya Bonic, Kai Anderson. Copyright © 2024 by Mercer (US) Inc. All rights reserved. This book is available wherever books and eBooks are sold.

Adapting to new technology requires custom learning experiences based on the different skills, knowledge, and attitudes of a diverse workforce. Some firms are trying and succeeding.

Khan Academy, a nonprofit in the education space, launched a project with OpenAI in early 2023 to power Khanmigo, an AI-fueled assistant that simultaneously supports teachers and tutors students. Chief learning officer Kristen DiCerbo believes the tool helps Khan Academy meet the diverse learning needs of its users in ways that signal the future of learning, noting, “They [our students] all have different gaps. They all need different things to move forward. That is a problem we’ve been trying to solve for a long time.” If tech can get us to more personalized learning experiences through a digital buddy — we’re in!

A major financial services firm also pulled it off, which is notable because many wouldn’t look to the financial sector as a leader in AI. Like many established organizations, Morgan Stanley faced a knowledge problem: They had a wealth of business insights, but couldn’t figure out how to spread the information in-house. The company intranet was bulging with insights on everything from market research to investment strategies, haphazardly strewn across multiple sites in PDF form.

To make the information more accessible, Morgan Stanley uses GPT-4 to fuel an internal chatbot that finds and delivers the resources employees are looking for, based on what they need. According to Jeff McMillan, the company’s head of analytics, data & innovation, “[This] effectively unlocks the cumulative knowledge [within our workforce].”
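
For readers who want a concrete picture, here is a minimal sketch of how a retrieval-style internal assistant like this might be wired up, assuming the OpenAI Python SDK. The documents, model choices, and question below are purely illustrative, not Morgan Stanley’s actual system.

import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# In practice these would be text chunks extracted from internal PDFs.
documents = [
    "2023 outlook: rate cuts are unlikely before the second half of the year.",
    "Onboarding guide: new advisors must complete compliance training first.",
    "Research note: small-cap valuations remain below their 10-year average.",
]

def embed(texts):
    # Turn each text into a vector so we can compare meaning, not just keywords.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question):
    # Find the document whose embedding is closest to the question's.
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    best_doc = documents[int(scores.argmax())]
    # Ask the model to answer using only the retrieved context.
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context: {best_doc}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("What does our research say about small-cap valuations?"))

The point of the pattern is simply that the model never has to “know” the intranet: the relevant document is looked up first and handed to it as context.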

And HR is getting its day in the sun, too. In 2023, Beamery launched TalentGPT — Gen AI for HR technology. This leverages GPT-4 and other large language models (LLMs) to redesign the experience of talent acquisition and talent management for users and HR. By personalizing career recommendations based on organizational skills gaps, they are enabling companies to accelerate skill acquisition, tackle their DEI challenges, and address their ethical obligations to boot. 

As [Beamery Co-Founder and CPO] Sultan Saidov commented, “These advances in AI technology are improving the interactions we can provide to our users, and how much time we can save people in achieving complex tasks.”

Smart working is getting smarter

The economist Richard Baldwin commented at the World Economic Forum’s Growth Summit that “AI won’t take your job, but someone who can use AI better than you, just might!” And this is the real challenge people are facing. It is less about intelligence per se than about how we can bolster our thinking by working smartly. Having smarts — knowing where to go or how to acquire what we need — has the potential to be an intelligence equalizer.

Therefore, we all need to get comfortable with the new language of Gen AI, appreciate how AI and automation will disrupt how we work, and ensure we have guardrails to maximize the gains while minimizing the risks.


One emerging challenge is to ensure that our workforce knows how to discern what’s real and what isn’t — how to cut through the rubbish. (And there is, as we’re all painfully aware, a lot of rubbish out there.) According to an Oliver Wyman report, “Despite having low trust in the accuracy of [information on social media], our Gen Z colleagues prioritize social media’s familiar faces [and] entertaining content” over the news sources they think would be more credible. In other words, Gen Zers know that social media is feeding them garbage, yet they’re still eating it. 

We all need to be more vigilant in checking the facts, details, and sources behind any AI-generated content before incorporating it into our work, in being clear about which co-workers (bot or human) are contributing to that work, and in watching for any potential copyright violations.


Speaking of vigilance, how many employees do you think might have inadvertently shared private or confidential information with ChatGPT via their company computer? Probably more than you’d hope. Most companies addressed the issue early on, thanks to diligent risk and compliance personnel, but the threat is here to stay. Learning how to amplify intelligence and how to do it safely are now onboarding imperatives for us all. This is an evolving space, but here is how some early adopters tackled it:

Instituting guardrails. Striking the right balance between defining guardrails and making it easy to share learnings and promote safe experiments is key. Set up your safeguards and controls in advance of new tech coming in. This includes risk management strategies, data policies, security training, algorithm audits, and an ethical AI credo that puts people before the tech, not the tech before the people.

Installing checks and balances. Some are using pass/fail courses to certify employees before giving them access to certain tools. Others are setting up alerts or disclaimers for internal content downloads. It is imperative to make certain that your country’s data protection regulations and related laws are being upheld and that updates are regularly shared. Having people who are dedicated to the use of Gen AI with a focus on privacy and ethics, and who can guide others in the firm on these issues, will help to navigate change.

Clarifying the impact on jobs and skills. A number of companies have started to actively analyze the roles within their organization to see how this new technology will impact jobs. Determine how AI can help with skills intelligence and strategic workforce management to meet emerging, evolving business needs. Acquiring technical and analytical skills that help with sense-checking AI outputs, and building a positive digital work culture, are also key to enterprise-wide adoption.

Identifying the impact on operations. Just as we’ve employed robotic process automation (RPA) for menial tasks, we now need to consider how cognitive processes such as conceptual reasoning, divergent thinking, and evaluative thinking can be enhanced. Augmenting our own intelligence to improve decision-making, overcome analysis paralysis, validate our hypotheses, and facilitate content creation is all part of the new frontier. Learning with computers to solve more complex challenges will come next.


Bolstering people insights. Assuming you have the right approvals and keep the data anonymized, leveraging internal HR data sets to train custom AI models that meet the specific needs of your business will bolster the insights you have about your people. It allows for better predictions about their health and behavior, addresses the problem of lagging human capital metrics in decision making, and enables more individualized and targeted people management interventions.
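
As an illustration only, here is a minimal sketch, in Python with pandas and scikit-learn, of the kind of custom model this could mean: a simple attrition predictor trained on anonymized workforce data. The file name, column names, and target variable are hypothetical placeholders, not any particular vendor’s schema.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Anonymized, approved-for-use workforce data: no names or identifiers.
df = pd.read_csv("hr_anonymized.csv")
X = df[["tenure_years", "engagement_score", "overtime_hours", "training_hours"]]
y = df["left_within_12_months"]  # 1 if the employee left, 0 otherwise

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit a simple classifier and check how well it ranks leavers vs. stayers.
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

Even a sketch like this makes the governance point above concrete: the value comes from the signals in your own data, so the approvals, anonymization, and ethical guardrails have to come first.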

Such over-reliance on technology could one day dilute our insights, compromise our critical thinking skills, and leave our decision-making at the mercy of the machines, as our own critical reasoning will have atrophied. This is the dystopian future we need to avoid, because it could lead to — say it with us, now — “The Robot Uprising.”

At the same time, drifting through an AI-driven world without getting engaged with this technology is a recipe for disaster, as a fool with a tool is still a fool. Any company or individual who doesn’t effectively use AI might be less productive — and less competitive — than those who do.

