Lethbridge Herald

Corporate profits must not trump public interest on AI: UN tech envoy

- Christopher Reynolds

The United Nations’ top tech official fears that corporate interests may undermine the push to rein in artificial intelligence, exacerbating social divisions and encroaching on human rights.

Countries could feel pressure to accommodate business demands for greater leeway rather than curbing industry excess, said Amandeep Gill in an interview ahead of a global AI conference in Montreal this week.

“I’m quite worried, frankly,” he said. Human rights and democratic values are at stake, according to the UN secretary-general’s tech envoy.

Researchers and political leaders have highlighted concerns ranging from biased data sets and widening global inequality to existential threats around sweeping cyber attacks and AI-developed bioweapons. Artificial intelligence pioneer Yoshua Bengio, who founded Quebec’s Mila AI institute, has sounded the alarm on immediate dangers such as “counterfeiting humans” using AI-driven bots as well as outsourcing lethal decisions to machines in war.

On Wednesday, academics, advocates, business leaders and policymakers convened in Montreal for a three-day conference titled “Protecting Human Rights in the Age of AI” and hosted by Mila.

Power consolidation, prejudice and privacy are three of its core themes.

“As potentially there is more concentration of wealth and tech power in a few companies, then that has implications for social equity, for our social contract,” said Gill.

The urge to get a leg up in the global technology race could conflict with the need to curtail the risks around rapid AI advances through laws and regulations, he said. Gill also stressed that AI oligopolies or concentration in a handful of countries would disadvantage smaller firms and developing nations.

The sheer scale of corporations such as Microsoft, Google and Amazon means AI could be dominated by an elite group almost from the get-go.

“A handful of Big Tech companies — by exploiting existing monopoly power and aggressively co-opting other actors — have already positioned themselves to control the future of artificial intelligence and magnify many of the worst problems of the digital age,” according to a recent report from the Open Markets Institute and its Center for Journalism and Liberty.

Further polarization within and between states is another possible outcome of AI run amok.

“If AI is leveraged for deep fakes, for misinformation, disinformation at scale, that could undermine the legitimacy of political processes in our society,” Gill said.

Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked U.S. President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.

Built-in bias is also an ongoing problem in the sector, as algorithms that draw on reams of data sometimes extend existing prejudices rather than sifting them out. “Those could be perpetuated or even worse,” Gill said, “whether it’s decisions being made on housing, on parole, in the judicial system or allocation of social benefits.”

AI is increasingly involved in decisions that have serious consequences for individuals.

Since 2015, police departments in Vancouver, Edmonton, Calgary, Saskatoon and London, Ont., have implemented or piloted predictive policing — automated decision-making based on data that predicts where a crime will occur or who will commit it.

The federal immigration and refugee system relies on algorithmically driven decisions to help determine factors such as whether a marriage is genuine or someone should be designated as a “risk,” according to a Citizen Lab study, which found the practice threatens to violate human rights law.

In 2014, Apple Inc. unveiled the Apple Health app for its smartwatch, which an executive said would “monitor all of your metrics that you’re most interested in.”

“It looked at your heartbeat, it looked at your quantity of salt,” said Benjamin Prud’homme, Mila’s head of policy. “And yet it was developed only by men, and so they completely forgot to look at menstruation cycles.”

Discriminatory outcomes are not limited to the national level, Gill added, noting that information fed into machine learning models stems largely from North America or Western Europe.

“That means a vast majority of world cultures, languages and contexts are not properly reflected in these data sets.”

Meanwhile, the power to track the online activity of citizens and social media users risks veering into violations of privacy rights.

In spite of the urgency to control cutting-edge AI, Gill said the United Nations must take a “modest” approach to establishing rules to encourage as many states as possible to sign on.

Last month, a UN advisory body released a preliminary report laying out the guiding principles for a framework on AI governance, stressing that no country be “left behind” as the pace of innovation nears light speed.

In Canada, the federal government introduced legislation placing guardrails around AI use in June 2022, but it has languished at the committee stage for nearly 10 months.

Big Tech executives said last week the Artificial Intelligence and Data Act as it stands is too vague, arguing that it fails to adequately distinguish between high- and low-risk AI systems.

The Liberals have said they will amend the act to introduce new rules. These include requiring companies responsible for generative AI systems — the algorithmic engine behind chatbots such as OpenAI’s ChatGPT, which can spit out anything from math problems to marriage advice — to take steps toward ensuring their content is identifiable as AI-made.

The legislation still aims for a more general, principles-based approach to AI governance that allows for agility amid the technology’s constant evolution, leaving most details to a later date.

Ottawa has said the act known as Bill C-27 will come into force no sooner than 2025.

ASSOCIATED PRESS FILE PHOTO: Amandeep Singh Gill, the United Nations tech policy chief, speaks during an interview, Friday, Sept. 22, 2023, at UN headquarters. Gill fears corporate interests may undermine the push to rein in artificial intelligence, exacerbating social divisions and encroaching on human rights.
