AI software for monitoring employees at home surges in popularity amid pandemic

The working-from-home revolution imposed on so many of us raises questions about how artificial intelligence could be used to monitor employees at home.

This concept is nothing new; early surveillance apps would signal managers when workers had been away from their desks for prolonged periods. With the global pandemic now forcing people out of the office, many of whom would never otherwise have been offered such flexibility, tech companies are rushing to help businesses keep an eye on their workforce.

In 2019, 50% of large corporations surveyed were using non-traditional methods to monitor their employees, such as analysing email text and gathering biometric data. Sales of monitoring software are booming, but tech buyers should be wary of how legitimate many of these ventures are, with smaller, newer companies jumping on the bandwagon.

Although many are credible tools, the majority prioritise productivity over workers’ rights and privacy. Employer anxiety in the current climate is clearly driving the trend, yet some see it as a breakthrough in “leadership science and artificial intelligence”.

Moral ambiguity aside, market competitors such as InterGuard, Hubstaff, Time Doctor, Teramind and Sneek have all reported exponential growth in user licences and sign-ups.

AI’s now-prominent role in employee monitoring puts the industry front and centre of the AI-ethics debate. Monitoring employees is legal in many places, yet the shift in societal behaviour and the invasive nature of many of these tools could prompt changes in legislation. Prospective customers could perceive this as a risk to the burgeoning success that AI-integrated software has seen this year.

The home was always supposed to be private, and now it is our workplace as well. The delicate balance between management and surveillance is being upset by companies looking to exert power over staff, and investments could lose value if laws were enacted to combat this.

The term “employee monitoring software” may seem inherently dystopian, and in the industry’s infancy certain rights and privacy safeguards have been flouted. But with competing products already being discussed heavily in the major media outlets, it’s clear that those looking to buy into AI should look this way: the new way to manage people.

The brand dangers of buying the wrong AI product

Companies looking to position themselves in the AI market will quickly need to answer buyers’ ethical questions, even though demand looks strong. Buying the ‘wrong AI’ could do serious damage to brand equity.

The AI market is on track to grow to $100bn by 2025 – six-fold from 2019’s figure of $16.4bn.

Research from analyst house Omdia – Artificial Intelligence Software Market Forecasts – suggests that while the pandemic forced some sectors to slow their AI efforts, the market is still set for stellar growth, as many companies have been forced to accelerate their AI adoption.

“Economic effects from the COVID-19 pandemic have widened the dichotomy between early AI adopters—the ‘AI haves’—and the trailing followers—the ‘AI have nots,’” said Omdia senior analyst, Neil Dunay.

“Industries that have pioneered AI deployments and have the largest AI investments are likely to continue to invest in what they view as proven, indispensable technology for cost cutting, revenue generation, and enhancing customer experience.”

But there are challenges in ensuring the AI industry grows safely – not only for its makers but also its buyers. As the industry develops, more questions are being raised about the ethics and biased foundations of some AI technologies.

For example, just this week the Bennett Institute for Public Policy at the University of Cambridge issued a report listing the themes technology buyers need to consider, reflecting the general public’s concerns about AI.

Related reading: Big Tech and Data Ethics, a blog post by Bennett Institute affiliate researcher Sam Gilbert.

They include:

  • Privacy and surveillance
  • Bias, discrimination, and injustice in algorithmic decision-making
  • Encoding of ethical assumptions in autonomous vehicle systems
  • Artificial general intelligence as an existential risk to humanity
  • Software user interface design as an impediment to human flourishing
  • Job displacement from machine-learning and robotics
  • Monetary compensation for personal data use

Report co-author Sam Gilbert notes:

“By giving ethics boards a formal role in governance structures, and giving individuals transparency and control over how their personal data is collected, stored, and used, tech companies can begin to transcend ‘ethics washing’.”

In a similar vein, in a paper released on Thursday, researchers from Google’s DeepMind and the University of Oxford proposed rebuilding the AI industry on a basis of anti-colonialism, to avoid algorithmic exploitation.

These are just two interventions by major institutions within a single week, and many more are likely to follow to ensure AI companies don’t ignore deep philosophical, political and social questions.

Why have Microsoft, IBM and Amazon disavowed ‘biased’ facial recognition only now?

Microsoft, IBM and Amazon have all walked away from supplying facial recognition technology to law enforcement.

None of the three holds a significant share of the market, but the reputational risk of continuing with the technology would have saddled each brand with far bigger problems: protest groups, privacy ethics and years of working through the courts.

A 2019 study by the Massachusetts Institute of Technology found that none of the facial recognition tools made by these companies was fully accurate in recognising men and women with dark skin. A study from the US National Institute of Standards and Technology suggested facial recognition algorithms were far less accurate at identifying African-American and Asian faces than Caucasian ones.

A long time knowing?

Each company had been informed some time ago that the technology was failing to deliver racially equitable assessments. But combine this with a sudden global movement against racial bias in society and, hey presto, each company denounces the technology.

The big U-turn

Regardless, the move draws a line in the sand for each company over where its moral compass on privacy now lies – to the relief of many staff who carry the brand flag home with their pay packets.

Even though each company announced its decision at a time when people worldwide are protesting against inequality and racial bias, the choice to distance themselves from ‘biased’ AI tech is bound to affect other product lines that carry their own privacy dilemmas.

The question has now been asked: how much bias is to be allowed? And if this is biased, what else is?

There is no going back.

The writing has been on the wall for facial recognition for years. That said, it is better to have influential companies making these moves now than never – even if they are only reactive.
