
Got “technostress”? How to process AI before it processes you

The mindless implementation of AI tools can come at a cost for our teams. Here are some red flags and solutions.
[Image: A blurred profile of a person's face against a vibrant yellow background, evoking an abstract sense of technostress. Credit: DTS / Nick Fancher / Big Think / Vincent Romero]
Key Takeaways
  • If AI is implemented poorly, it can adversely affect our mental health, decision-making abilities, and sense of agency.
  • AI interaction makes some people want to belong to a group while others feel increased loneliness.
  • Visionary transformative leadership is essential to reduce the “technostress” associated with AI adoption.

As companies around the world rush to implement AI solutions that promise convenience and speed, they often overlook the most important element of successful AI implementation: their employees.

Emerging research shows that AI fundamentally changes how we work, generate ideas, and interact with each other. And if the technology is implemented poorly, it can harm our mental health, our decision-making abilities, and the sense of agency that is crucial to our overall well-being.

If we want to stay relevant in the future of work, we need to reframe the question of “what can AI do?” and make sure that we’re mastering the skills the technology can’t. We need to process AI before it processes us.

Finding our connections

Today, around 50% of employees already interact with AI daily. Because AI handles some tasks more efficiently than humans do, this fundamentally alters the dynamics within organizations. When employees are less likely to reach out to colleagues with small questions, it erodes the “weak ties” that are essential not only for human wellbeing but also for a sense of affiliation and belonging within an organization.

Research shows that increased AI interaction heightens people’s desire to belong to a group and establish social contact with others, but it also increases loneliness. Which effect prevails depends on an employee’s attachment style and coping strategies. Employees with adaptive coping behaviors reach out more to human colleagues and build relationships with them as they start using AI. Employees with maladaptive strategies (for example, those experiencing attachment anxiety), however, end up feeling socially deprived and lonely as a result of increased AI usage, which can ultimately affect their mental health.


When an organization implements AI, it should actively encourage socialization among employees and offer alternative ways to connect. It is also important to consider employees’ individual attachment styles when implementing AI and to make sure that those at risk (for example, those with attachment-related trauma) have access to psychological support.

Seeing the bigger picture

Creativity is often named as one of the key human qualities that should help people stay relevant and keep their jobs in the age of AI. Neuroscience tells us that to be creative, a person needs an unoccupied mind and time for it to wander. Most of us, however, don’t have the time and space to be creative, as we constantly overload our brains with digital noise: notifications, emails, news, calls, and so on. When our brains are overwhelmed and we are short of time, instead of engaging all of the brain’s networks for creativity, we tend to outsource at least part of the creative process to AI, for example, the idea-generation stage. So the job that was supposed to stay with humans is increasingly done by machines.

To help employees maintain creative and innovative thinking, organizations may want to create conditions in which employees have space to think deeply rather than constantly replying to messages and sitting in meetings.

Organizations also need to educate employees about the neuroscience of creativity, explaining that you cannot remove one stage of the creative process from the human brain and get the same result. A creative mind is wired differently: it simultaneously engages three brain networks that typically don’t work together. The default mode network generates ideas and is involved in brainstorming. The salience network identifies which ideas get passed along to the executive control network, which in turn evaluates those ideas, keeping the promising ones in focus and discarding the rest.

An idea may go through several iterations within your brain before it’s ready to be born, and while it might seem like using AI as a shortcut will speed the process up, it simply won’t deliver the same result.

AI and surveillance

More and more companies are implementing AI to monitor employees, whether for productivity or wellbeing purposes. Sophisticated algorithms predict a person’s mood or likelihood of leaving the company by combining various signals: typing speed and patterns, choice of words in emails or chats, and so on. While companies always claim to be acting in the interests of employees, two risks come with these tools.

First, no algorithm is 100% accurate, and it may be biased against people in specific circumstances (for example, those experiencing mental health issues). When managers take an algorithmic suggestion at face value and base their decisions solely on it, it sets a dangerous precedent: algorithms are “black boxes,” and nobody, including their creators, fully understands how an algorithm arrived at a given decision. Depending on the data and other parameters, its conclusions may be partially right or wrong, but there is always a margin of error.

Another big issue with implementing AI for employee surveillance is that when employees are monitored and know it, their sense of agency drops. Multiple studies show that the sense of agency — being in control of one’s attention and time — is key to wellbeing. AI-assisted wellbeing tools risk becoming “one more thing” for employees to worry about at work rather than something that helps them.

Companies need to regularly audit the algorithms they use for mistakes and biases, and act on the findings. They also need to train managers on data bias and ensure that algorithmic conclusions are never the sole basis for a decision, but merely the starting point for a human conversation with an employee.

Business leaders also need to clearly understand why they are using AI and recognize that these systems cannot be held morally responsible for hard decisions like firing someone. Instead, managers must assume full responsibility for such decisions, without the excuse that “AI said so.” It is important that control and agency stay with the individual, not with the algorithm.

Leading the way

According to one study, our approach to leadership determines how much “technostress” people experience during digital transformation. While the study did not examine AI implementation specifically but rather digital transformation processes in general, its findings can help organizations identify the kind of leadership best suited to AI adoption.

It found that laissez-faire leadership (in which a leader is hands-off and delegates all decisions to their people) puts people at the greatest risk of technostress. The transactional leadership style, which uses rewards and punishments to motivate and direct followers, also tends to generate technostress, though less than laissez-faire leadership.

Visionary transformative leadership — in which a leader acts as a role model, inspiring positive change and conveying a clear vision of group goals — affects technostress the least. In other words, employees need a clear vision of how AI systems should be used and implemented. (Do not confuse this guidance with pressure to perform against rigid metrics, which is characteristic of the transactional leadership style.)

When implementing AI in the workplace, companies should make sure that the leader responsible for the rollout either masters this approach or works closely with someone who does.

A final thought

While AI tools at work bring convenience and speed, their mindless implementation can come at a cost for our people. If we want to live harmoniously with AI, we need to avoid innovation for innovation’s sake: consider both the short-term and long-term consequences of implementing it, and address the risks proactively.


