The Korea Times

Beware of tech principles in sheep’s clothing

By Robert D. Atkinson

In the last few years, a growing movement has advocated “responsible” technology development and use, particularly for artificial intelligence.

With monikers like “human-centered AI” and “responsible technology,” these sound, as we say in America, like “motherhood and apple pie.” Who can be against human-centered and responsible technology?

Companies, trade associations, professional associations, civil society groups, and others have issued hundreds of these statements of principles. The World Economic Forum advocates for “Responsible Use of Technology.”

The Partnership on AI seeks solutions so that “AI advances positive outcomes for people and society.” The Institute of Electrical and Electronics Engineers (IEEE) seeks “Ethically Aligned Design” for AI. These are clearly in contrast to the hordes of researchers and companies seeking unethically designed AI to be used irresponsibly to advance negative outcomes.

At one level, these principles are useful. Of course, engineers and companies developing and using technologies should do so in responsible ways, including ensuring safe products with limited bias and adequate privacy.

But it is not clear why developing AI-related products is any different from developing vacuum cleaners and cars. We have relied mostly on market forces to get companies to produce responsible products, because those that did not would lose market share.

The problem is that many of these efforts go beyond statements of generally agreed-upon principles to attempts to impose specific, elitist values on technology development in order to advance a particular agenda.

We see this in a number of areas. Some of these organizations argue that it is unethical for scientists and engineers to work on AI that can be used in weapons systems. Who gives them the right to weaken a nation’s defense forces? Engineers who do not want to support their nation’s defense forces can work for companies that are not engaged in defense. But it is certainly not the place of a professional association to advocate against working on AI weapons systems.

Many of these organizations seek to impose their particular views about privacy on technology. The Association for the Advancement of Artificial Intelligence states the obvious: “AI professionals should only use personal information in accordance with the applicable laws and regulations.” But it goes on to argue that:

Professionals should establish transparent policies and procedures that allow individuals to understand what data is being collected and how it is being used, where it is transmitted for processing, to give informed consent for automatic data collection, and to review, obtain, correct inaccuracies in, and delete their personal data.

But these practices involve tradeoffs that will limit data innovation. If democratically elected governments want such privacy rules, that is their right. But professional organizations should not dictate those choices.

Where these organizations go furthest in attempting to impose their own values on society is when they pontificate on AI and production efficiency, with many taking a clear stand against supporting automation technology.

Emblematic is Stanford’s Human-Centered Artificial Intelligence center, which states: “AI should effectively communicate and collaborate with people to augment their capabilities and make their lives better and more enjoyable. Humans are not simply ‘in-the-loop.’ Humans are in charge; AI is ‘in-the-loop.’”

They go on to note: “Scholars envision a future where people and machines are collaborators, not competitors.” One staff person at the Partnership on AI, a group with little or no expertise in economics, writes, when discussing AI that would automate a job:

One of the things that has become increasingly clear to me is that a lot of the ways that artificial intelligence is being developed and deployed at present are decoupled from what would generate broadly shared prosperity.

We don’t get much of what we’d be able to identify as genuine productivity increases with benefits that could be spread across the rest of humanity. You probably get some pretty handsome profits for the companies who came up with the products, but you also get very clear, real, human harms on the other side of that.

These harms come in the form of fewer jobs, lower pay for work that requires less specialization than before, increases in physical and mental stress on the job, and decreases in autonomy, privacy, and dignity.

In its report “Ethically Aligned Design,” the IEEE states that if technology is developed by a few companies in rich countries: “The benefits would largely accrue to the highly educated and wealthier segment of the population, while displacing the less educated workforce, both by automation and by the absence of educational or retraining systems capable of imparting skills and knowledge needed to work productively alongside A/IS.”

It goes on to favorably cite a World Economic Forum report that calls on engineers and companies to focus on augmentation technologies, not automation technologies:

The findings of this report suggest the need for a comprehensive “augmentation strategy,” an approach where businesses look to utilize the automation of some job tasks to complement and enhance their human workforces’ comparative strengths and ultimately to enable and empower employees to extend to their full potential.

Rather than narrowly focusing on automation-based labor cost savings, an augmentation strategy takes into account the broader horizon of value-creating activities that can be accomplished by human workers, often in complement to technology …

Not only has this notion that only augmentation technology, and not automation technology, grows the economy been shown wrong time and again by economic studies, but these organizations also should not try to force their views on others.

If an organization finds that augmenting AI technologies work better than automating ones, it can make that choice. But if it finds that automating technologies are better, the choice should likewise be its own, particularly because that choice will boost living standards.

If democratic governments decide they prefer a bit less worker displacement and the slower economic growth that comes with it, that is their choice. It is certainly not the right of well-paid scientists and engineers to impose their preferences on society through professional ethics principles.

In Korea, AI engineers and companies should not adopt principles that constitute social policy masquerading as neutral, objective technology principles. Doing so would limit AI development and use in Korea, slowing economic growth and reducing consumer welfare.

Dr. Robert D. Atkinson (@RobAtkinsonITIF) is the president of the Information Technology and Innovation Foundation (ITIF), an independent, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy. The views expressed in the above article are those of the author and do not reflect the editorial direction of The Korea Times.
