According to a range of new surveys, somewhere between 29% and 44% of employees admit to sabotaging the adoption of AI in their roles and companies in response to fears of being replaced.
“[There are] many forms of resistance. In some cases, employees said they have ignored guidelines, opted out of AI training, or flat-out refused to use AI tools. In more extreme situations, some admit to having fed sensitive company information to public, unapproved AI tools and even to tampering with performance metrics to make the tech seem less effective.”
A delicate tension is emerging here. Leaders and employers report that they are highly likely to require some level of AI proficiency among their workers (and to let go those who lack it), while the workers themselves worry that becoming proficient in the tools will hasten the obsolescence of their current roles. As both sides struggle to find a balance, a short-term crisis is taking shape that may carry long-term implications.
At a moment when the utility of AI for business tasks is very much being developed in real time, training these tools on incomplete or intentionally flawed data or behaviors risks derailing their future value. Much like a customer database filled with garbage emails, corrupted training data can compound over time until the AI becomes impossible to rely on.
What do you think? Is this sabotage an emerging behavior that could lead to long-term challenges, or are there effective ways for organizations to manage this issue?