The Week (US)

Regulation: Who’s responsible when AI goes awry?


Powerful new chatbots and other recent AI technologies are like “a runaway train that we’re chasing on foot,” said computer scientist Cynthia Rudin in The Hill. Billions of dollars are “pouring into AI technology that generates realistic images and text, with essentially no good controls on who generates what.” The tech companies creating them “don’t appear to care about how their products impact—or even wreck—our society.” Venture capitalists happily compare the emergence of AI to the launch of the internet; it could just as easily become “like launching a nuclear bomb on the truth.” Yet few on Capitol Hill seem in any rush to act. Rep. Ted Lieu (D-Calif.) had the right idea when he proposed creating a dedicated government agency to regulate AI. We need to demand strict government guardrails before we get “a dangerous avalanche of misinformation.”

“My own experience using OpenAI’s ChatGPT puts misinformation at between 5 and 10 percent,” said Parmy Olson in Bloomberg. That’s conservative, and it matches others’ experience. CBS’s 60 Minutes asked ChatGPT where correspondent Lesley Stahl, a longtime veteran of the network, worked. The answer: NBC. Commercial pressures are pushing Big Tech to take this technology to a wider audience well before it’s ready. “The problem is that most lawmakers do not even know what AI is,” said Cecilia Kang and Adam Satariano in The New York Times. Washington has yet to produce a single bill “to thwart the development” of AI’s “dangerous aspects,” while previous efforts “to curb AI applications like facial recognition” have “withered.” We are seeing a pattern here, with Washington “taking a hands-off stance” even as technology “outstrips U.S. rule-making and regulation.”

The one thing that may get tech companies to move cautiously is liability, said Will Oremus and Cristiano Lima in The Washington Post. Tech platforms are largely spared liability for content posted on their sites by the set of communications rules known as Section 230. Supreme Court Justice Neil Gorsuch has already posited that the “protections that shield social networks” wouldn’t apply to a “polemic” generated by AI. That could expose tech companies like Google or Microsoft to lawsuits for false or libelous results. If we do nothing to change current laws, then critics of tech “will get what they have long desired: cracks in the shield,” said Matt Perault in Lawfare. What we need is legislation that can balance technological progress with social protection, and won’t “cripple large language models with lawsuits and court fees.” The reality, unfortunately, is that gridlock makes it likely that Congress will instead “stand still” and leave the lead on making rules for AI to the courts.

[Image caption: AI can be thoroughly convincing yet 100% wrong.]
