As technology quickly evolves, legislators have a lot of questions about AI
HARTFORD, Conn. — As state lawmakers rush to get a handle on fast-evolving artificial intelligence technology, they’re often focusing first on their own state governments before imposing restrictions on the private sector.
Legislators are seeking ways to protect constituents from discrimination and other harms while not hindering cutting-edge advancements in medicine, science, business, education and more.
“We’re starting with the government. We’re trying to set a good example,” Connecticut state Sen. James Maroney said during a floor debate in May.
Connecticut plans to inventory all of its government systems using artificial intelligence by the end of 2023, posting the information online. And starting next year, state officials must regularly review these systems to ensure they won’t lead to unlawful discrimination.
Maroney, a Democrat who has become a go-to AI authority in the General Assembly, said Connecticut lawmakers will likely focus on private industry next year. He plans to work this fall on model AI legislation with lawmakers in Colorado, New York, Virginia, Minnesota and elsewhere that includes “broad guardrails” and focuses on matters like product liability and requiring impact assessments of AI systems.
“It’s rapidly changing and there’s a rapid adoption of people using it. So we need to get ahead of this,” he said in a later interview. “We’re actually already behind it, but we can’t really wait too much longer to put in some form of accountability.”
Overall, at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills this year. As of late July, 14 states and Puerto Rico had adopted resolutions or enacted legislation, according to the National Conference of State Legislatures. The list doesn’t include bills focused on specific AI technologies, such as facial recognition or autonomous cars, something NCSL is tracking separately.
Legislatures in Texas, North Dakota, West Virginia and Puerto Rico have created advisory bodies to study and monitor the AI systems their respective state agencies are using, while Louisiana formed a new technology and cybersecurity committee to study AI’s impact on state operations, procurement and policy.
Other states took a similar approach last year.
Lawmakers want to know: “Who’s using it? How are you using it? Just gathering that data to figure out what’s out there, who’s doing what,” said Heather Morton, a legislative analyst at NCSL who tracks artificial intelligence, cybersecurity, privacy and internet issues in state legislatures. “That is something that the states are trying to figure out within their own state borders.”
Connecticut’s new law, which requires AI systems used by state agencies to be regularly scrutinized for possible unlawful discrimination, comes after an investigation by the Media Freedom and Information Access Clinic at Yale Law School determined AI is already being used to assign students to magnet schools, set bail and distribute welfare benefits, among other tasks. However, details of the algorithms are mostly unknown to the public.
AI technology, the group said, “has spread throughout Connecticut’s government rapidly and largely unchecked, a development that’s not unique to this state.”
Richard Eppink, legal director of the American Civil Liberties Union of Idaho, testified before Congress in May about discovering, through a lawsuit, the “secret computerized algorithms” Idaho was using to assess people with developmental disabilities for federally funded health care services. The automated system, he said in written testimony, included corrupt data and relied on inputs the state hadn’t validated.
AI can be shorthand for many different technologies, ranging from algorithms recommending what to watch next on Netflix to generative AI systems such as ChatGPT that can aid in writing or create new images or other media. The surge of commercial investment in generative AI tools has generated public fascination and concerns about their ability to trick people and spread disinformation, among other dangers.
Some states haven’t attempted to tackle the issue yet. In Hawaii, state Sen. Chris Lee, a Democrat, said lawmakers didn’t pass any legislation this year governing AI “simply because I think at the time, we didn’t know what to do.”
Instead, the Hawaii House and Senate passed a resolution Lee proposed that urges Congress to adopt safety guidelines for the use of artificial intelligence and limit its application in the use of force by police and the military.