San Francisco Chronicle

Workplace: Artificial intelligence taking over supervisor’s role

Artificial intelligence tells crew how to add energy, empathy

By Kevin Roose

When Conor Sprouls, a customer service representative in the call center of insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, AI tells him how he’s doing.

Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.

Sound sleepy? The software displays an “energy cue,” with a picture of a coffee cup.

Not empathetic enough? A heart icon pops up.

For decades, people have fearfully imagined armies of hyperefficient robots invading offices and factories, gobbling up jobs once done by humans. But in all of the worry about the potential of artificial intelligence to replace rank-and-file workers, we may have overlooked the possibility it will replace the bosses, too.

Sprouls and the other call center workers at his office in Warwick, R.I., still have plenty of human supervisors. But the software on their screens — made by Cogito, an AI company in Boston — has become a kind of adjunct manager, always watching them. At the end of every call, Sprouls’ Cogito notifications are tallied and added to a statistics dashboard that his supervisor can view. If he hides the Cogito window by minimizing it, the program notifies his supervisor.

Cogito is one of several AI programs used in call centers and other workplaces. The goal, according to Joshua Feast, Cogito’s chief executive, is to make workers more effective by giving them real-time feedback.

“There is variability in human performance,” Feast said. “We can infer from the way people are speaking with each other whether things are going well or not.”

The goal of automation has always been efficiency, but in this new kind of workplace, AI sees humanity itself as the thing to be optimized. Amazon uses complex algorithms to track worker productivity in its fulfillment centers, and can automatically generate the paperwork to fire workers who don’t meet their targets, as The Verge uncovered this year. (Amazon has disputed that it fires workers without human input, saying that managers can intervene in the process.) IBM has used Watson, its AI program, during employee reviews to predict future performance and claims it has a 96% accuracy rate.

Then there are the startups. Cogito, which works with large insurance companies like MetLife and Humana as well as financial and retail firms, says it has 20,000 users. Percolata, a Palo Alto company that counts Uniqlo and 7-Eleven among its clients, uses in-store sensors to calculate a “true productivity” score for each worker, and rank workers from most to least productive.

Management by algorithm is not a new concept. In the early 20th century, Frederick Winslow Taylor revolutionized the manufacturing world with his “scientific management” theory, which tried to wring inefficiency out of factories by timing and measuring each aspect of a job. More recently, Uber, Lyft and other on-demand platforms have made billions of dollars by outsourcing conventional tasks of human resources — scheduling, payroll, performance reviews — to computers.

But using AI to manage workers in conventional, 9-to-5 jobs has been more controversial. Critics have accused companies of using algorithms for managerial tasks, saying that automated systems can dehumanize and unfairly punish employees. And while it’s clear why executives would want AI that can track their workers, it’s less clear why workers would.

“It is surreal to think that any company could fire their own workers without any human involvement,” Marc Perrone, the president of United Food and Commercial Workers International Union, which represents food and retail workers, said in a statement about Amazon in April.

In the gig economy, management by algorithm has also been a source of tension between workers and the platforms that connect them with customers. This year, drivers for Postmates, DoorDash and other on-demand delivery companies protested an algorithmic method of calculating their pay that put customer tips toward guaranteed minimum wages — a practice that was nearly invisible to drivers, because of the way the services obscure the details of worker pay.

There were no protests at MetLife’s call center. Instead, the employees I spoke with seemed to view their Cogito software as a mild annoyance at worst. Several said they liked getting pop-up notifications during their calls, although some said they had struggled to figure out how to get the “empathy” notification to stop appearing. (Cogito says the AI analyzes subtle differences in tone between the worker and the caller and encourages the worker to try to mirror the customer’s mood.)

MetLife, which uses the software with 1,500 of its call center employees, says using the app has increased its customer satisfaction by 13%.

“It actually changes people’s behavior without them knowing about it,” said Christopher Smith, MetLife’s head of global operations. “It becomes a more human interaction.”

Still, there is a creepy sci-fi vibe to a situation in which AI surveils human workers and tells them how to relate to other humans. And it is reminiscent of the “workplace gamification” trend that swept through corporate America a decade ago, when companies used psychological tricks borrowed from video games, like badges and leader boards, to try to spur workers to perform better.

Phil Libin, the chief executive of All Turtles, an AI startup studio in San Francisco, recoiled in horror when I told him about my call center visit.

“That is a dystopian hellscape,” Libin said. “Why would anyone want to build this world where you’re being judged by an opaque, black-box computer?”

Defenders of workplace AI might argue that these systems are not meant to be overbearing. Instead, they’re meant to make workers better by reminding them to thank the customer, to empathize with the frustrated claimant on Line 1 or to avoid slacking off on the job.

The best argument for workplace AI may be situations in which human bias skews decision-making, such as hiring. Pymetrics, a New York startup, has made inroads in the corporate hiring world by replacing the traditional résumé screening process with an AI program that uses a series of games to test for relevant skills. The algorithms are then analyzed to make sure they are not creating biased hiring outcomes, or favoring one group over another.

“We can tweak data and algorithms until we can remove the bias. We can’t do that with a human being,” said Frida Polli, Pymetrics’ chief executive.

Using AI to correct for human biases is a good thing. But as more AI enters the workplace, executives will have to resist the temptation to use it to tighten their grip on their workers and subject them to constant surveillance and analysis. If that happens, it won’t be the robots staging an uprising.

Photos by Tony Luong / New York Times. Above: Conor Sprouls works as a customer service representative at a MetLife call center in Warwick, R.I., where an artificial intelligence program monitors employees. Below: Screen prompts are part of the job.

Tony Luong / New York Times. Aaron Osei works at the MetLife call center in Warwick, R.I., where there are AI supervisors as well as human ones.
