Sun.Star Cebu

AI pervades everyday life with almost no oversight


DENVER — While artificial intelligence made headlines with ChatGPT, behind the scenes, the technology has quietly pervaded everyday life — screening job resumes, rental apartment applications, and even determining medical care in some cases.

While a number of AI systems have been found to discriminate, tipping the scales in favor of certain races, genders or incomes, there’s scant government oversight.

Lawmakers in at least seven states in the U.S. are taking big legislative swings to regulate bias in artificial intelligence, filling a void left by Congress’ inaction. These proposals are some of the first steps in a decades-long discussion over balancing the benefits of this nebulous new technology with the widely documented risks.

“AI does in fact affect every part of your life whether you know it or not,” said Suresh Venkatasubramanian, a Brown University professor who co-authored the White House’s Blueprint for an AI Bill of Rights.

“Now, you wouldn’t care if they all worked fine. But they don’t.”

Success or failure will depend on lawmakers working through complex problems while negotiating with an industry worth hundreds of billions of dollars and growing at breakneck speed.

Last year, only about a dozen of the nearly 200 AI-related bills introduced in statehouses were passed into law, according to BSA The Software Alliance, which advocates on behalf of software companies.

Those bills, along with the over 400 AI-related bills being debated this year, were largely aimed at regulating smaller slices of AI. That includes nearly 200 targeting deepfakes, including proposals to bar pornographic deepfakes, like those of Taylor Swift that flooded social media. Others are trying to rein in chatbots, such as ChatGPT, to ensure they don’t cough up instructions to make a bomb, for example.

Those are separate from the seven state bills that would apply across industries to regulate AI discrimination — one of the technology’s most perverse and complex problems — being debated from California to Connecticut.

Those who study AI’s penchant to discriminate say states are already behind in establishing guardrails. The use of AI to make consequential decisions — what the bills call “automated decision tools” — is pervasive but largely hidden.

It’s estimated that as many as 83 percent of employers use algorithms to help in hiring; among Fortune 500 companies, that figure is 99 percent, according to the Equal Employment Opportunity Commission.

Yet the majority of Americans are unaware that these tools are being used at all, polling from Pew Research Center shows, let alone whether the systems are biased.

An AI system can learn bias from the data it’s trained on — typically historical data that can carry a Trojan horse of past discrimination.

Nearly a decade ago, Amazon scuttled its hiring algorithm project after the tool was found to favor male applicants.

The AI was trained to assess new resumes by learning from past resumes, which came largely from male applicants. While the algorithm didn’t know the applicants’ genders, it still downgraded resumes with the word “women’s” or that listed women’s colleges, in part because they were not represented in the historical data it learned from.

“If you are letting the AI learn from decisions that existing managers have historically made, and if those decisions have historically favored some people and disfavored others, then that’s what the technology will learn,” said Christine Webber, the attorney in a class-action lawsuit alleging that an AI system scoring rental applicants discriminated against those who were Black or Hispanic.
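The dynamic Webber describes — a model inheriting bias from the historical decisions it learns from — can be illustrated with a toy sketch. This is a hypothetical example written for this article, not the code of any real hiring system: a simple logistic model is trained on synthetic "past hiring" data in which resumes mentioning a women's college were frequently rejected regardless of merit, and the model ends up penalizing that proxy feature even though gender itself is never an input.

```python
# Toy illustration (hypothetical data, not any real hiring system):
# a model trained on biased historical decisions learns the bias.
import math
import random

random.seed(0)

# Synthetic history: each resume is [experience_score, mentions_womens_college].
data = []
for _ in range(1000):
    exp_score = random.random()
    womens = random.random() < 0.3
    # Biased historical label: qualified candidates were hired, but resumes
    # with the proxy term were usually rejected regardless of experience.
    hired = exp_score > 0.5 and not (womens and random.random() < 0.8)
    data.append(([exp_score, 1.0 if womens else 0.0], 1.0 if hired else 0.0))

# Train a logistic regression by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(200):
    for x, y in data:
        z = w[0] * x[0] + w[1] * x[1] + b
        p = 1 / (1 + math.exp(-z))      # predicted probability of "hire"
        err = p - y                     # gradient of the logistic loss
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# The weight on the proxy feature comes out negative: the model has
# learned to downgrade resumes mentioning a women's college, mirroring
# the bias baked into the historical labels.
print(f"weight on proxy feature: {w[1]:.2f}")
```

The model was never told anyone's gender; it simply reproduced the pattern in the decisions it was shown — the same failure mode reported in the Amazon case above.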

[Photo] The ChatGPT app is seen on an iPhone in New York, May 18, 2023. (AP)
