Qatar Tribune

AI is moving fast; AI regulation needs to catch up

- KISLAYA PRASAD (Kislaya Prasad is a research professor at the Robert H. Smith School of Business and academic director of its Center for Global Business.)

Artificial intelligence is already affecting our lives in many positive ways: automating tasks, helping to diagnose medical conditions and acting as a voice-controlled virtual assistant for many. Still, there is a very real danger of misuse and unintended consequences, as we saw recently in Maryland with the filing of what is believed to be the first criminal case against someone for allegedly using AI to create a revenge recording targeting an employer. Consequently, governments here and around the world have been grappling with how best to regulate the technology.

Last year President Joe Biden issued an executive order on AI that established new standards for safety and security. The EO noted that AI heightens the incentive of developers to collect and exploit personal data and called on Congress to pass data privacy regulations. There appears to be new momentum on this front, with Democratic Sen. Maria Cantwell and Republican U.S. Rep. Cathy McMorris Rodgers, both of Washington state, having just announced a bipartisan privacy bill (the American Privacy Rights Act).

Another feature of AI that has led to calls for regulation is its potential to heighten bias and discrimination. There have been several well-publicized instances of bias in algorithms that are entrusted with highly consequential decisions (e.g., predicting how sick a patient is). Independent bias auditing has been proposed as one potential solution. Other proposals seek to protect consumers, patients, students and workers in various ways.

While the EO on AI directed relevant government agencies to take action, Congress has not enacted significant new laws to regulate AI. To fill the gap, several bills have been introduced in state legislatures. According to the National Conference of State Legislatures, at least 40 states, along with Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI bills in the 2024 legislative session. While the bills are too varied to summarize in full, they include some important categories:

- Bills addressing criminal use of AI, such as the creation of child pornography or of synthetic voice or image likenesses in an attempt to commit fraud (e.g., the distribution of “deepfakes” and other deceptive media to influence elections).

- Bills creating disclosure requirements when content is generated or decisions are reached using AI.

- Bills that restrict how automated decision tools (such as those used for hiring employees) are used.

- Bills providing protection against discrimination by AI.

Within this last category are bills that reiterate existing rights (removing ambiguities that arise when discriminatory decisions are made by algorithms instead of people) and bills that require impact assessments or create standards for independent bias auditing. Crime, employment, education, health and insurance have been singled out for particular attention by state legislatures.

In the absence of federal AI regulation, concern is growing that we are headed toward a patchwork of legislation coupled with weak enforcement. There is the additional danger of a race to the bottom if states try to attract business by promising a lax regulatory environment. An argument can be made for avoiding heavy-handed regulation. For instance, United Kingdom Prime Minister Rishi Sunak has asserted that “The UK’s answer is not to rush to regulate … we believe in innovation … And in any case, how can we write laws that make sense for something we don’t yet fully understand?”

While Sunak’s premise that AI is not sufficiently well understood seems flawed, the possibility that regulation will hamper innovation needs to be taken seriously. This is a point made often by spokespersons for the tech industry. The trade-off between protecting individual rights and hampering innovation was debated in the European Union before it settled in favor of the comprehensive Artificial Intelligence Act. However, whether the costs of complying with regulation would in fact be high enough to materially detract from innovation is still very much an open question.

A national survey of 885 U.S. executives that I recently conducted sheds light on this question. I asked respondents about their perceptions of compliance costs and their support for specific AI regulation proposals. The group comprised individuals directly involved in decisions about the adoption and implementation of AI within their companies, who are therefore likely to be knowledgeable about compliance costs.

Respondents were asked whether they supported (1) regulations mandating disclosure of AI use and data collection policies, (2) bias regulations mandating third-party auditing, and (3) mandates requiring explanations for autonomous decisions. Support for regulation was surprisingly high: more than 70% of respondents either strongly or somewhat supported each type of regulation. This was true even though a majority felt that complying with regulation would impose a moderate or significant resource challenge. For this group, the benefits of regulation clearly outweigh the compliance costs.

The bills being debated by the states are a guidepost for what is needed at the national level: disclosure of AI use, protections against bias and discrimination by algorithms, and oversight to ensure the safe and fair use of autonomous decision tools. This needs to be combined with strengthening existing laws to cover new phenomena, such as price fixing by algorithms. The proposed data privacy bill is a welcome first step. In addition to data privacy protections, it includes a section on civil rights and algorithms to address some forms of discrimination. By setting national standards, it would simplify compliance relative to a patchwork of state laws. However, given the current political climate and calendar, there is uncertainty about where this draft legislation is headed. There is every reason to wish it success, and from there to move forward to comprehensive national regulation of AI. Developments in AI are happening too fast to put off sensible regulation.

Baltimore County Police Chief Robert McCullough speaks about the arrest of Dazhon Darien, Pikesville High’s athletic director, who allegedly used artificial intelligence to create a fake racist recording of the school’s principal. At right is Baltimore County Executive John Olszewski Jr.
