Sun Sentinel Palm Beach Edition

Agencies eyeing AI effect on consumers, workplace

Regulators intent on holding powerful tech’s users responsible for its impacts

- By Cora Lewis

NEW YORK — As concerns grow over increasingly powerful artificial intelligence systems like ChatGPT, the nation’s financial watchdog says it’s working to ensure that companies follow the law when they’re using AI.

Already, automated systems and algorithms help determine credit ratings, loan terms, bank account fees and other aspects of our financial lives. AI also affects hiring, housing and working conditions.

Ben Winters, senior counsel for the Electronic Privacy Information Center, said a joint statement on enforcement released by federal agencies last month was a positive first step.

“There’s this narrative that AI is entirely unregulated, which is not really true,” he said. “They’re saying, ‘Just because you use AI to make a decision, that doesn’t mean you’re exempt from responsibility regarding the impacts of that decision. This is our opinion on this. We’re watching.’ ”

In the past year, the Consumer Financial Protection Bureau said it has fined banks over mismanaged automated systems that resulted in wrongful home foreclosures, car repossessions and lost benefit payments, after the institutions relied on new technology and faulty algorithms.

There will be no “AI exemptions” to consumer protection, regulators say, pointing to these enforcement actions as examples.

Consumer Financial Protection Bureau Director Rohit Chopra said the agency has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists and others to make sure we can confront these challenges” and that the agency is continuing to identify potentially illegal activity.

Representatives from the Federal Trade Commission, the Equal Employment Opportunity Commission and the Department of Justice, as well as the CFPB, all say they’re directing resources and staff to take aim at new tech and identify negative ways it could affect consumers’ lives.

“One of the things we’re trying to make crystal clear is that if companies don’t even understand how their AI is making decisions, they can’t really use it,” Chopra said.

Under the Fair Credit Reporting Act and Equal Credit Opportunity Act, for example, financial providers have a legal obligation to explain any adverse credit decision. Those regulations likewise apply to decisions made about housing and employment. Where AI makes decisions in ways that are too opaque to explain, regulators say the algorithms shouldn’t be used.

“I think there was a sense that, ‘Oh, let’s just give it to the robots and there will be no more discrimination,’ ” Chopra said. “I think the learning is that that actually isn’t true at all. In some ways the bias is built into the data.”

EEOC Chair Charlotte Burrows said there will be enforcement against AI hiring technology that screens out job applicants with disabilities, for example, as well as so-called “bossware” that illegally surveils workers.
