White House proposes rules to guide AI regulations
The Trump administration is proposing new rules to guide future federal regulation of artificial intelligence used in medicine, transportation and other industries.
But the vagueness of the principles announced by the White House is unlikely to satisfy AI watchdogs who have warned of a lack of accountability as computer systems are deployed to take on human roles in high-risk social settings, such as mortgage lending or job recruitment.
The White House said that in deciding regulatory action, U.S. agencies “must consider fairness, nondiscrimination, openness, transparency, safety and security.”
But federal agencies must also avoid setting up restrictions that “needlessly hamper AI innovation and growth,” reads a memo being sent to U.S. agency chiefs from Russell Vought, acting director of the Office of Management and Budget.
“Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits,” the memo says.
The rules won’t affect how federal agencies such as law enforcement use facial recognition and other forms of AI. They are specifically limited to how agencies devise new AI regulations for the private sector. There’s a 60-day public comment period before the rules take effect.
“These principles are intentionally high-level,” said Lynne Parker, U.S. deputy chief technology officer at the White House’s Office of Science and Technology Policy. “We purposely wanted to avoid top-down, one-size-fits-all, blanket regulations.”
The White House said the proposals unveiled Tuesday are meant to promote private sector applications of AI that are safe and fair, while also pushing back against stricter regulations favored by some lawmakers and activists.
Federal agencies such as the Food and Drug Administration and the Federal Aviation Administration will be bound to follow the new AI principles. That makes the rules “the first of their kind from any government,” Michael Kratsios, the U.S. chief technology officer, said in a call with reporters Monday.
Rapid advancements in AI technology have raised fresh concern as computers increasingly take on jobs such as diagnosing medical conditions, driving cars, recommending stock investments, judging credit risk and recognizing individual faces in video footage. It’s often not clear how AI systems make their decisions, leading to questions of how far to trust them and when to keep humans in the loop.
Terah Lyons of the nonprofit Partnership on AI, which advocates for responsible AI and has backing from major tech firms and philanthropies, said the White House principles likely won’t have sweeping or immediate effects. But she said she was encouraged that they detailed a U.S. approach centered on values such as trustworthiness and fairness.