AI at Work: Friend or Foe? Understanding the Impact of AI Surveillance on Employee Performance

July 10, 2024

Organisations using AI to monitor employees’ behaviour and productivity can expect increased complaints, reduced productivity, and higher turnover, unless the technology is perceived as supporting employee development, according to recent research from Cornell University.

The researchers found that participants who believed they were being monitored by AI generated fewer ideas, indicating a decline in performance. These findings were part of the study “Algorithmic Versus Human Surveillance Leads to Lower Perceptions of Autonomy and Increased Resistance,” published in June in Nature Research’s Communications Psychology.

“When artificial intelligence and other advanced technologies are implemented for developmental purposes, people appreciate that they can learn from it and improve their performance,” said Emily Zitek, associate professor of organisational behaviour at the ILR School.

“The problem occurs when they feel like an evaluation is happening automatically, straight from the data, without any ability to contextualise it.”

In four experiments involving nearly 1,200 participants, researchers Rachel Schlund and Zitek investigated whether it matters if people or AI conduct surveillance and whether the context—evaluating performance or supporting development—influences perceptions.

In the first study, participants were asked to recall and write about a time when they were monitored and evaluated by either AI or human surveillance.

Participants reported feeling less autonomy under AI and were more likely to engage in “resistance behaviours.”

Next, to simulate real-world surveillance, a pair of studies had participants brainstorm ideas for a theme park as a group and then individually generate ideas for one segment of the park. They were told their work would be monitored by either a research assistant or AI, the latter appearing in the Zoom video conference as a feed labelled “AI Technology Feed.”

After several minutes, either the human assistant or the “AI” relayed messages that the participants weren’t generating enough ideas and should try harder. In post-study surveys, more than 30% of participants criticised AI surveillance, compared to about 7% who were critical of human monitoring.

“The reinforcement from the AI made the situation just more stressful and less creative,” one participant wrote.

In a fourth study, participants were asked to imagine they worked in a call centre and were told that either humans or AI would analyse a sample of their calls. For some participants, the analysis would be used to evaluate their performance; for others, it would provide developmental feedback. In the developmental scenario, participants did not perceive algorithmic surveillance as infringing on their autonomy more than human surveillance did, and they did not report a greater intention to quit.

The results suggest an opportunity for organisations to implement algorithmic surveillance in ways that could build trust rather than inspire resistance.

“Organisations trying to implement this kind of surveillance need to recognise the pros and cons,” Zitek said. “They should do what they can to make it more developmental or ensure that people can add contextualisation. If people feel like they don’t have autonomy, they’re not going to be happy.”
