The Boston Globe

As use of AI spreads, officials seek to set limits

Lawmakers in at least 7 states have eye on bias

By Jesse Bedayn

DENVER — While artificial intelligence made headlines with ChatGPT, behind the scenes, the technology has quietly pervaded everyday life — screening job resumes and rental apartment applications, and even determining medical care in some cases.

While a number of AI systems have been found to discriminate, tipping the scales in favor of certain races, genders, or incomes, there’s scant government oversight.

Lawmakers in at least seven states are taking big legislative swings to regulate bias in artificial intelligence, filling a void left by Congress’ inaction. These proposals are some of the first steps in a decades-long discussion over balancing the benefits of this nebulous new technology with the widely documented risks.

“AI does in fact affect every part of your life whether you know it or not,” said Suresh Venkatasubramanian, a Brown University professor who coauthored the White House’s Blueprint for an AI Bill of Rights.

“Now, you wouldn’t care if they all worked fine. But they don’t.”

Success or failure will depend on lawmakers working through complex problems while negotiating with an industry worth hundreds of billions of dollars and growing at extraordinary speed.

Last year, only about a dozen of the nearly 200 AI-related bills introduced in state houses were passed into law, according to BSA The Software Alliance, which advocates on behalf of software companies.

Those bills, along with the over 400 AI-related bills being debated this year, were largely aimed at regulating smaller slices of AI. That includes nearly 200 targeting deepfakes, including proposals to bar pornographic deepfakes, like those of Taylor Swift that flooded social media.

Others are trying to rein in chatbots, such as ChatGPT, to ensure they don’t cough up instructions to make a bomb, for example.

Those are separate from the seven state bills that would apply across industries to regulate AI discrimination — one of the technology’s most perverse and complex problems — being debated from California to Connecticut.

Those who study AI’s penchant for discrimination say states are already behind in establishing guardrails. The use of AI to make consequential decisions — what the bills call “automated decision tools” — is pervasive but largely hidden.

As many as 83 percent of employers are estimated to use algorithms to help in hiring, and the figure rises to 99 percent among Fortune 500 companies, according to the Equal Employment Opportunity Commission.

Yet polling from the Pew Research Center shows that the majority of Americans are unaware these tools are being used, let alone whether the systems are biased. An AI can learn bias through the data it’s trained on, typically historical data that can hold remnants of past discrimination.

Amazon scuttled its hiring algorithm project after it was found to favor male applicants. The AI was trained to assess new resumes by learning from past resumes — largely those of male applicants. While the algorithm didn’t know the applicants’ genders, it still downgraded resumes with the word “women’s” or that listed women’s colleges, in part because they were not represented in the historical data.

“If you are letting the AI learn from decisions that existing managers have historically made, and if those decisions have historically favored some people and disfavored others, then that’s what the technology will learn,” said Christine Webber, the attorney in a class-action lawsuit alleging that an AI system scoring rental applicants discriminated against those who were Black or Hispanic.

Court documents describe one of the lawsuit’s plaintiffs, Mary Louis, a Black woman who applied to rent an apartment in Massachusetts and received a cryptic response: “The third-party service we utilize to screen all prospective tenants has denied your tenancy.”

When Louis submitted two landlord references to show she’d paid rent early or on time for 16 years, court records say, she received another reply: “Unfortunately, we do not accept appeals and cannot override the outcome of the Tenant Screening.”

That lack of transparency and accountability is, in part, what the bills are targeting.

Under the bills, companies using these automated decision tools would have to do “impact assessments,” including descriptions of how AI figures into a decision, the data collected, and an analysis of the risks of discrimination, along with an explanation of the company’s safeguards. Depending on the bill, those assessments would be submitted to the state, or regulators could request them.

