Regulation: Who’s responsible when AI goes awry?
Powerful new chatbots and other recent AI technologies are like “a runaway train that we’re chasing on foot,” said computer scientist Cynthia Rudin in The Hill. Billions of dollars are “pouring into AI technology that generates realistic images and text, with essentially no good controls on who generates what.” The tech companies creating them “don’t appear to care about how their products impact—or even wreck—our society.” Venture capitalists happily compare the emergence of AI to the launch of the internet; it could just as easily be “like launching a nuclear bomb on the truth.” Yet few on Capitol Hill seem in any rush to act. Rep. Ted Lieu (D-Calif.) had the right idea when he proposed creating a dedicated government agency to regulate AI. We need to demand strict government guardrails before we get “a dangerous avalanche of misinformation.”
“My own experience using Microsoft’s ChatGPT-powered chatbot puts misinformation at between 5 and 10 percent,” said Parmy Olson in Bloomberg. That’s a conservative estimate, and it matches others’ experience. CBS’s 60 Minutes asked ChatGPT where correspondent Lesley Stahl—a 20-year veteran of the network—worked. The answer: NBC. Commercial pressures are pushing Big Tech to take this technology to a wider audience well before it’s ready. “The problem is that most lawmakers do not even know what AI is,” said Cecilia Kang and Adam Satariano in The New York Times. Washington has yet to produce a single bill “to thwart the development” of AI’s “dangerous aspects,” while previous efforts “to curb AI applications like facial recognition” have “withered.” We are seeing a pattern here, with Washington “taking a hands-off stance” even as technology “outstrips U.S. rule-making and regulation.”
The one thing that may get tech companies to move cautiously is liability, said Will Oremus and Cristiano Lima in The Washington Post. Tech platforms are largely shielded from liability for content posted on their sites by the set of communications rules known as Section 230. But Supreme Court Justice Neil Gorsuch has already posited that the “protections that shield social networks” wouldn’t apply to a “polemic” generated by AI. That could expose tech companies like Google or Microsoft to lawsuits over false or libelous results. If we do nothing to change current laws, then critics of tech “will get what they have long desired: cracks in the shield,” said Matt Perault in Lawfare. What we need is legislation that balances technological progress with social protection, and won’t “cripple large language models with lawsuits and court fees.” The reality, unfortunately, is that gridlock makes it likely that Congress will instead “stand still” and leave it to the courts to take the lead on making rules for AI.