Jamaica Gleaner

What is ‘ethical AI’ and how can companies achieve it?


THE RUSH to deploy powerful new generative AI technologies, such as ChatGPT, has raised alarms about potential harm and misuse. The law’s glacial response to such threats has prompted demands that the companies developing these technologies implement AI “ethically”.

But what, exactly, does that mean? The straightforward answer would be to align a business’s operations with one or more of the dozens of sets of AI ethics principles that governments, multistakeholder groups, and academics have produced. But that is easier said than done.

Dennis Hirsch, Professor of Law and Computer Science at The Ohio State University; Piers Norris, Associate Professor of Philosophy at The Ohio State University; and their team explore this question further.

They spent two years interviewing and surveying AI ethics professionals across a range of sectors to try to understand how they sought to achieve ethical AI – and what they might be missing.

They learned that pursuing AI ethics on the ground is less about mapping ethical principles on to corporate actions than it is about implementing management structures and processes that enable an organisation to spot and mitigate threats.

This is likely to be disappointing news for organisations looking for unambiguous guidance that avoids gray areas and for consumers hoping for clear and protective standards. But it points to a better understanding of how companies can pursue ethical AI.

GRAPPLING WITH ETHICAL UNCERTAINTIES

“Our study, which is the basis for a forthcoming book, centered on those responsible for managing AI ethics issues at major companies that use AI. From late 2017 to early 2019, we interviewed 23 such managers. Their titles ranged from privacy officer and privacy counsel to one that was new at the time but increasingly common today: data ethics officer. Our conversations with these AI ethics managers produced four main takeaways,” said the professors.

“First, along with its many benefits, business use of AI poses substantial risks, and the companies know it. AI ethics managers expressed concerns about privacy, manipulation, bias, opacity, inequality, and labour displacement.

“In one well-known example, Amazon developed an AI tool to sort résumés and trained it to find candidates similar to those it had hired in the past. Male dominance in the tech industry meant that most of Amazon’s employees were men. The tool accordingly learned to reject female candidates. Unable to fix the problem, Amazon ultimately had to scrap the project.”
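To see how this kind of bias can emerge, here is a minimal Python sketch – a hypothetical toy scorer, not Amazon’s actual system – that learns keyword weights from past hiring decisions. Because most past hires in the made-up data are men, keywords correlated with female applicants pick up negative weights even though they say nothing about ability.

```python
# Toy illustration of learned hiring bias (hypothetical data, not Amazon's system).
from collections import defaultdict

# Historical data: (résumé keywords, was the applicant hired?)
history = [
    ({"python", "men's rugby club"}, True),
    ({"java", "men's chess team"}, True),
    ({"python", "java"}, True),
    ({"python", "women's chess club"}, False),
    ({"java", "women's coding society"}, False),
    ({"python"}, True),
]

overall_hire_rate = sum(hired for _, hired in history) / len(history)

# Weight of a keyword = hire rate among résumés containing it, relative to the baseline.
counts = defaultdict(lambda: [0, 0])  # keyword -> [times seen, times hired]
for keywords, hired in history:
    for kw in keywords:
        counts[kw][0] += 1
        counts[kw][1] += hired

weights = {kw: hired / seen - overall_hire_rate for kw, (seen, hired) in counts.items()}

def score(resume):
    """Score a new résumé by summing the learned keyword weights."""
    return sum(weights.get(kw, 0.0) for kw in resume)

# Two equally qualified candidates; only the gendered keyword differs.
print(score({"python", "men's chess team"}))     # higher score
print(score({"python", "women's chess club"}))   # lower score: inherited bias
```

The scorer never sees gender directly; it simply reproduces the correlations in the historical outcomes it was trained on, which is why such bias is hard to “fix” after the fact.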

Generative AI raises additional worries about misinformation and hate speech at large scale and misappropriation of intellectual property.

“Second, companies that pursue ethical AI do so largely for strategic reasons. They want to sustain trust among customers, business partners, and employees. And they want to pre-empt, or prepare for, emerging regulations,” the researchers noted.

“The Facebook-Cambridge Analytica scandal, in which Cambridge Analytica used Facebook user data, shared without consent, to infer the users’ psychological types and target them with manipulative political ads, showed that the unethical use of advanced analytics can eviscerate a company’s reputation or even, as in the case of Cambridge Analytica itself, bring it down. The companies we spoke to wanted instead to be viewed as responsible stewards of people’s data.”

The challenge that AI ethics managers faced was figuring out how best to achieve “ethical AI”. They looked first to AI ethics principles, particularly those rooted in bioethics or human rights principles, but found them insufficient. It was not just that there are many competing sets of principles. It was that justice, fairness, beneficence, autonomy, and other such principles are contested and subject to interpretation and can conflict with one another.

“This led to our third takeaway: Managers needed more than high-level AI principles to decide what to do in specific situations. One AI ethics manager described trying to translate human rights principles into a set of questions that developers could ask themselves to produce more ethical AI software systems. ‘We stopped after 34 pages of questions,’ the manager said,” the team pointed out.

The fourth thing the researchers discovered was that professionals grappling with ethical uncertainties turned to organisational structures and procedures to arrive at judgments about what to do. Some of these were clearly inadequate.

But others, while still largely in development, were more helpful, such as:

• Hiring an AI ethics officer to build and oversee the programme.

• Establishing an internal AI ethics committee to weigh and decide hard issues.

• Crafting data ethics checklists and requiring front-line data scientists to fill them out.

• Reaching out to academics, former regulators, and advocates for alternative perspectives.

• Conducting algorithmic impact assessments of the type already in use in environmental and privacy governance.

“The key idea that emerged from our study is this: Companies seeking to use AI ethically should not expect to discover a simple set of principles that delivers correct answers from an all-knowing, God’s-eye perspective. Instead, they should focus on the very human task of trying to make responsible decisions in a world of finite understanding and changing circumstances, even if some decisions end up being imperfect,” the researchers noted.

In the absence of explicit legal requirements, companies, like individuals, can only do their best to make themselves aware of how AI affects people and the environment and to stay abreast of public concerns and the latest research and expert ideas. They can also seek input from a large and diverse set of stakeholders and seriously engage with high-level ethical principles.

This simple idea changes the conversation in important ways. It encourages AI ethics professionals to focus their energies less on identifying and applying AI principles – though they remain part of the story – and more on adopting decision-making structures and processes to ensure that they consider the impacts, viewpoints, and public expectations that should inform their business decisions.

“Ultimately, we believe laws and regulations will need to provide substantive benchmarks for organisations to aim for. But the structures and processes of responsible decision-making are a place to start and should, over time, help to build the knowledge needed to craft protective and workable substantive legal standards,” said the professors.

Indeed, the emerging law and policy of AI focuses on process. New York City passed a law requiring companies to audit their AI systems for harmful bias before using these systems to make hiring decisions. Members of Congress have introduced bills that would require businesses to conduct algorithmic impact assessments before using AI for lending, employment, insurance, and other such consequential decisions. These laws emphasize processes that address in advance AI’s many threats.
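As a rough illustration of what such a bias audit can involve – a sketch only, not the methodology any particular law prescribes – one common check compares selection rates across demographic groups and flags large disparities using an “impact ratio”, with the conventional four-fifths rule of thumb as the threshold.

```python
# Hypothetical bias-audit check: compare how often an automated hiring tool
# selects candidates from different groups and flag large disparities.

def selection_rate(decisions):
    """Fraction of candidates in a group that the tool selected."""
    return sum(decisions) / len(decisions)

def impact_ratios(outcomes_by_group):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {g: selection_rate(d) for g, d in outcomes_by_group.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Made-up audit data: 1 = selected by the AI tool, 0 = rejected.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% selected
}

for group, ratio in impact_ratios(outcomes).items():
    flag = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

Here group_b’s selection rate is half of group_a’s, so its impact ratio of 0.50 falls below the 0.8 rule of thumb and would prompt further review before the tool is used for hiring decisions.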

Some of the developers of generative AI have taken a very different approach. Sam Altman, the CEO of OpenAI, initially explained that in releasing ChatGPT to the public, the company sought to give the chatbot “enough exposure to the real world that you find some of the misuse cases you wouldn’t have thought of so that you can build better tools”.

“To us, that is not responsible AI. It is treating human beings as guinea pigs in a risky experiment,” the professors stated.

“Altman’s call at a May 2023 Senate hearing for government regulation of AI shows greater awareness of the problem. But we believe he goes too far in shifting to government the responsibilities that the developers of generative AI must also bear. Maintaining public trust, and avoiding harm to society, will require companies more fully to face up to their responsibilities.”
