Business Day

Big Brother is watching — unless you have a lawyer by your side

Human oversight will be a critical focus as we move deeper into the age of artificial intelligence

Lusanda Raphulu ● Raphulu is a partner in the Bowmans employment and benefits practice.

Walking into the distribution warehouse of a major SA retailer, visitors are greeted by a buzz of voices saying things like: “Yes, yes, okay”, and “Repeat, repeat”. The people appear to be talking to themselves, but they are not. Wearing headsets and wristband devices, they are responding to instructions from a “voice-picking” system that tells them which items to collect from the various shelves, where to find them and how many to pick.

This kind of technology has been used in SA since about 2015 and is credited with significant efficiency improvements and reduced errors in the fast-moving consumer goods environment, where its hands-free, eyes-free features make it easier and quicker for human operators to locate selected products.

Wearable devices can also monitor the individual’s productivity and time management to the second. They can instantly detect that a particular worker is not meeting the product-picking targets set for the day and flag this for a performance discussion. Such devices bring the concept of clocking in and clocking out and monitoring what happens in between to new heights of precision.

This might have strong appeal for employers, but what about employees? What if they feel uncomfortable about being constantly monitored and consider this an invasion of privacy? Can the employer insist on the use of such wearables as a condition of employment?

Such questions are increasingly coming up in the evolving world of work, where “people analytics” are more widely used than many of us realise. Some automation industry experts estimate 70% of large organisations worldwide already have some form of people analytics to analyse what employees and customers are saying, doing and even feeling.

In the field of “sentiment analysis” there is a tool that can read people’s e-mails and report on the mood they are in. Another tool analyses voices to determine how trustworthy a person is. Then there is a tool that allows companies to monitor their employees’ internal networks by keeping tabs on their day-to-day contacts, so they can restructure networks that are inefficient or unproductive.

The legal position of employers and employees in workplaces where people analytics are used is something we are all going to have to watch closely as usage becomes more pervasive and perhaps more personal.

One of the key points employers should keep in mind is that the use of such tools must be fair: there must be mechanisms in place for employees to understand any decision made as a result of people analytics, as well as a means to challenge such decisions. Most importantly, employees must be able to appeal on a human level to real people.

Human oversight of machines is going to be a critical focus as we move ever deeper into the age of artificial intelligence (AI) and robotics.

Employers will have to think carefully about accountability in the event of a machine or AI program malfunctioning and causing some kind of damage or harm. Someone has to manage the machines, and if a human being is responsible for the software or hardware malfunction that caused the problem, then misconduct or poor performance would almost certainly come into play.

As the law stands, employers are vicariously liable for the wrongful acts of their employees or agents if these acts occur during the course of employment. In the future, employers could also find themselves being held liable for the wrongful acts of their autonomous robots.

In the meantime, company managements should be giving some serious thought to upskilling their human resources practitioners, who will have to be more astute than ever in anticipating and managing the effects of technological changes on the workforce.

Wearable devices, for instance, come with a host of employee-related implications, especially for retraining, reskilling and health and safety. With robotic wearables becoming more prevalent, the chances of being injured by a robot rise, making occupational health and safety ever more important.

Yet another aspect for employers to consider is that AI can be discriminatory if not properly programmed. For example, in a job selection process, algorithmic analysis could turn otherwise objective criteria into biased outcomes. In SA, where the Employment Equity Act prohibits unfair discrimination, such algorithms would need to be carefully configured to ensure compliance.

Then there is the possibility of pushback from organised labour. Trade unions are likely to view the increasing use of robotics as a threat to human job security and, perhaps, in the case of tools for people analytics, to employee privacy too.

A great deal of work will be needed to secure buy-in from organised labour by demonstrating the anticipated gains from the use of AI and robotics, including robotic wearables. Factors such as increases in the number of jobs, higher wages and better working hours and conditions will be important in these conversations.

Given the pace at which robotics and AI are entering the world of work, these conversati­ons should already be happening in earnest. Technology will not wait for the human element to catch up.

IN THE EVOLVING WORLD OF WORK ‘PEOPLE ANALYTICS’ ARE MORE WIDELY USED THAN MANY OF US REALISE

MANAGEMENTS SHOULD BE GIVING SERIOUS THOUGHT TO UPSKILLING THEIR HUMAN RESOURCES PRACTITIONERS

Keep calm and carry on: Employees sort packages at the Amazon distribution centre warehouse in Saran, near Orleans, France. Monitoring and analysis of workers’ actions are growing. /Reuters
