Northwest Arkansas Democrat-Gazette

Federal agency bans AI-voice robocalls

By ALI SWENSON. Information for this article was contributed by Christina A. Cassidy and Frank Bajak of The Associated Press.

NEW YORK — The Federal Communications Commission on Thursday outlawed robocalls that contain voices generated by artificial intelligence, a decision that sends a clear message that exploiting the technology to scam people and mislead voters won’t be tolerated.

The unanimous ruling targets robocalls made with AI voice-cloning tools under the Telephone Consumer Protection Act, a 1991 law restricting junk calls that use artificial and prerecorded voice messages.

The announcement comes as New Hampshire authorities are advancing their investigation into AI-generated robocalls that mimicked President Joe Biden’s voice to discourage people from voting in the state’s first-in-the-nation primary last month.

Effective immediately, the regulation empowers the FCC to fine companies that use AI voices in their calls or block the service providers that carry them. It also opens the door for call recipients to file lawsuits and gives state attorneys general a new mechanism to crack down on violators, according to the FCC.

The agency’s chairperson, Jessica Rosenworcel, said bad actors have been using AI-generated voices in robocalls to misinform voters, impersonate celebrities and extort family members.

“It seems like something from the far-off future, but this threat is already here,” Rosenworcel told The Associated Press on Wednesday as the commission was considering the regulations. “All of us could be on the receiving end of these faked calls, so that’s why we felt the time to act was now.”

Under the consumer protection law, telemarketers generally cannot use automated dialers or artificial or prerecorded voice messages to call cellphones, and they cannot make such calls to landlines without prior written consent from the call recipient.

The new ruling classifies AI-generated voices in robocalls as “artificial” and thus enforceable under the same standards, the FCC said.

Those who break the law can face steep fines, with a maximum of more than $23,000 per call, the FCC said. The agency has previously used the consumer law to clamp down on robocallers interfering in elections, including imposing a $5 million fine on two conservative hoaxers for falsely warning people in predominantly Black areas that voting by mail could heighten their risk of arrest, debt collection and forced vaccination. The law also gives call recipients the right to take legal action and potentially recover up to $1,500 in damages for each unwanted call.

Josh Lawson, director of AI and democracy at the Aspen Institute, said even with the FCC’s ruling, voters should prepare themselves for personalized spam to target them by phone, text and social media.

“The true dark hats tend to disregard the stakes and they know what they’re doing is unlawful,” he said. “We have to understand that bad actors are going to continue to rattle the cages and push the limits.”

Kathleen Carley, a Carnegie Mellon professor who specializes in computational disinformation, said that detecting AI abuse of voice technology requires being able to clearly identify that the audio was AI-generated.

That is possible now, she said, “because the technology for generating these calls has existed for a while. It’s well understood and it makes standard mistakes. But that technology will get better.”

Sophisticated generative AI tools, from voice-cloning software to image generators, already are in use in elections in the U.S. and around the world.

Last year, as the U.S. presidential race got underway, several campaign advertisements used AI-generated audio or imagery, and some candidates experimented with using AI chatbots to communicate with voters.

Bipartisan efforts in Congress have sought to regulate AI in political campaigns, but no federal legislation has passed, with the general election nine months away.

Rep. Yvette Clarke, who introduced legislation to regulate AI in politics, lauded the FCC for its ruling but said now Congress needs to act.

“I believe Democrats and Republicans can agree that AI-generated content used to deceive people is a bad thing, and we need to work together to help folks have the tools necessary to help discern what’s real and what isn’t,” said Clarke, D-N.Y.

The AI-generated robocalls that sought to influence New Hampshire’s Jan. 23 primary election used a voice similar to Biden’s, employed his often-used phrase, “What a bunch of malarkey,” and falsely suggested that voting in the primary would preclude voters from casting a ballot in November.
