San Antonio Express-News

Texas is studying the effects of state agencies using AI

By Keaton Peters, The Texas Tribune

The Texas Tribune is a nonprofit, nonpartisan media organization that informs Texans about public policy, politics, government and statewide issues.

When the Texas Workforce Commission became inundated with jobless claims in March 2020, it turned to artificial intelligence.

Affectionately named for the agency’s former head Larry Temple, who had died a year earlier, “Larry” the chatbot was designed to help Texans sign up for unemployment benefits.

Like a next-generation FAQ page, Larry would field user-generated questions about unemployment cases. Using AI language processing, the bot would determine which answer prewritten by human staff would best fit the user’s unique phrasing of the question. The chatbot answered more than 21 million questions before being replaced by Larry 2.0 last March.
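The article does not describe Larry’s internals, but the behavior it describes — matching a user’s phrasing against a bank of prewritten answers — can be sketched with a simple text-similarity approach. The following Python sketch is illustrative only; the questions, answers and TF-IDF matching method are assumptions, not the commission’s actual system.

    # A minimal sketch of FAQ-style matching, assuming a TF-IDF text-similarity
    # approach; the article does not describe Larry's actual model, so the
    # questions, answers and method here are illustrative assumptions only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical question-and-answer pairs prewritten by human staff.
    faq = {
        "How do I apply for unemployment benefits?": "Apply online at the agency portal ...",
        "When will I receive my first payment?": "Payments are typically issued after ...",
        "How do I reset my password?": "Use the password reset link on the login page ...",
    }

    questions = list(faq.keys())
    vectorizer = TfidfVectorizer().fit(questions)
    question_vectors = vectorizer.transform(questions)

    def best_answer(user_question: str) -> str:
        """Return the prewritten answer whose question best matches the user's phrasing."""
        user_vector = vectorizer.transform([user_question])
        scores = cosine_similarity(user_vector, question_vectors)[0]
        return faq[questions[scores.argmax()]]

    # The user's wording need not match a stored question exactly.
    print(best_answer("when do i get my first benefit payment"))

The key design point is that a bot like this never writes anything new: every reply it can give was authored and vetted by a person in advance.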

Larry is one example of the ways artificial intelligence has been used by state agencies. Adoption of the technology in state government has grown in recent years. But that acceleration also has sparked fears of unintended consequences like bias, loss of privacy or loss of control of the technology. This year, the Legislature committed to taking a more active role in monitoring how the state is using AI.

“This is going to totally revolutionize the way we do government,” said state Rep. Giovanni Capriglione, R-Southlake, who wrote a bill aimed at helping the state make better use of AI technology.

In June, Gov. Greg Abbott signed that bill, House Bill 2060, into law, creating an AI advisory council to study and inventory the ways state agencies currently use AI and to assess whether the state needs a code of ethics for AI. The council will monitor how the state uses AI but will not write final policy.

Artificial intelligence describes a class of technology that emulates and builds upon human reasoning through computer systems. The chatbot uses language processing to understand users’ questions and match them to predetermined answers. New tools such as ChatGPT are categorized as generative AI because the technology generates a unique answer based on a user prompt. AI is also capable of analyzing large data sets and using that information to automate tasks previously performed by humans. Automated decision-making is at the center of HB 2060.
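The distinction matters in practice: a retrieval bot like Larry can only choose among answers staff wrote ahead of time, while a generative model composes new text for every prompt. Here is a minimal sketch of the generative style, using the OpenAI Python client purely as a stand-in; the article does not say any Texas agency uses this service, and the model name and prompt are illustrative assumptions.

    # A generative-AI sketch using the OpenAI Python client as a stand-in for
    # any hosted generative model; nothing here reflects an actual Texas
    # agency deployment, and the model name and prompt are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Explain how to apply for unemployment benefits in Texas."}],
    )

    # Unlike the retrieval bot above, the answer is composed fresh for each
    # prompt rather than selected from a fixed set of prewritten replies.
    print(response.choices[0].message.content)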

More than one-third of Texas state agencies already are using some form of artificial intelligence, according to a 2022 report from the Texas Department of Information Resources. The workforce commission also has an AI tool for job seekers that provides customized recommendations of job openings. Various agencies are using AI for translating languages into English and for call center tools such as speech-to-text. AI also is used to enhance cybersecurity and fraud detection.

Automation also is used for time-consuming work in order to “increase work output and efficiency,” according to a statement from the Department of Information Resources. One example of this could be tracking budget expenses and invoices.

Few requirements

In 2020, DIR launched an AI Center for Excellence aimed at helping state agencies implement more AI technology. Participation in DIR’s center is voluntary, and each agency typically has its own technology team, so the extent of automation and AI deployment at state agencies is not closely tracked.

Right now, Texas state agencies have to verify that the technology they use meets safety requirements set by state law, but there are no specific disclosure requirements on the types of technology or how they are used. HB 2060 will require each agency to provide that information to the AI advisory council by July 2024.

“We want agencies to be creative,” Capriglione said. He favors finding more use cases for AI that go well beyond chatbots, but recognizes there are concerns around poor data quality stopping the systems from working as intended: “We’re going to have to set some rules.”

As adoption of AI has grown, so have worries around the ethics and functionality of the technology. The AI advisory council is the first step toward oversight of how the technology is being deployed. The seven-member council will include one member each from the state House and Senate, an executive director and four individuals appointed by the governor with expertise in AI, ethics, law enforcement and constitutional law.

Samantha Shorey is an assistant professor at the University of Texas at Austin who has studied the social implications of artificial intelligence, particularly the kind designed for increased automation. She is concerned that if technology is empowered to make more decisions, it will replicate and exacerbate social inequality: “It might move us towards the end goal more quickly. But is it moving us towards an end goal that we want?”

Proponents of using more AI view automation as a way to make government work more efficiently. Harnessing the latest technology could help speed up case management for social services, provide immediate summaries of lengthy policy analysis or streamline the hiring and training process for new government employees.

However, Shorey is cautious about the possibility of artificial intelligence being brought into decision-making processes such as determining who qualifies for social service benefits, or how long someone should be on parole. Earlier this year, the U.S. Justice Department began investigating allegations that a Pennsylvania county’s AI model intended to help improve child welfare was discriminating against parents with disabilities and resulting in their children being taken away.

Potential concerns

AI systems “tend to absorb whatever biases there are in the past data,” said Suresh Venkatasubramanian, director of the Center for Technology Responsibility at Brown University. Artificial intelligence trained on data that includes any kind of gender, religious, racial or other bias is at risk of learning to discriminate.

In addition to the problem of flawed data reproducing social inequality, there are also privacy concerns around the technology’s dependence on collecting large amounts of data.

What the AI could be doing with that data over time also is driving fears that humans will lose some control over the technology.

“As AI gets more and more complicated, it’s very hard to understand how these systems are working, and why they’re making decisions the way they do,” Venkatasubramanian said.

That fear is shared by Jason Green-Lowe, executive director at the Center for AI Policy, a group that has lobbied for stricter AI safety regulation in Washington, D.C.

With the accelerating pace of technology and a lack of regulatory oversight, Green-Lowe said, “soon we might find ourselves in a world where AI is mostly steering. … And the world starts to reorient itself to serve the AI’s interests rather than human interest.”

Some technical experts, however, are more confident that humans will remain in the driver’s seat of increasing AI deployment.

Alex Dimakis, a professor of electrical engineering and computer science at the University of Texas at Austin, worked on the artificial intelligence commission for the U.S. Chamber of Commerce.

In Dimakis’ view, AI systems should be transparent and subject to independent evaluation known as red teaming, a process in which the underlying data and decision-making process of the technology are scrutinized by multiple experts to determine if more robust safety measures are necessary.

“You cannot hide behind AI,” Dimakis said. Beyond transparency and evaluation, Dimakis said the state should enforce existing laws against whoever created the AI in any case where the technology produces an outcome that violates the law: “apply the existing laws without being confused that an AI system is in the middle.”

The AI advisory council will submit its findings and recommendations to the Legislature by December.

In the meantime, interest is growing in deploying AI at all levels of government.

DIR operates an artificial intelligence user group made up of representatives from state agencies, higher education and local government interested in implementing AI.

Interest in the user group is growing by the day, according to a DIR spokesperson. The group has more than 300 members representing more than 85 different entities.

Brett Coomer/Staff file photo: The Texas Workforce Commission website added a virtual assistant chat feature. “Larry” the chatbot would select prewritten answers that best matched user-submitted questions.
