New Straits Times

LEGITIMISING ARTIFICIAL INTELLIGENCE

Regulating the use of AI, especially in developing countries, remains challenging. Legislation and industry regulations must keep up with the changes to prevent abuse, write Jomo Kwame Sundaram and Rosli Omar

JOMO KWAME SUNDARAM and ROSLI OMAR, IPS

OWING to our varied circumstances and experiences, there are contradictory tendencies to either exaggerate or underestimate the power and importance of artificial intelligence (AI) in society.

Nor should we uncritically legitimise everything AI can be used for, even if it has been hailed as the main frontier of the Davos-proclaimed Fourth Industrial Revolution. AI, more than other elements of Industry 4.0, is transforming humanity’s understanding of ourselves in novel ways the world has neither experienced nor conceived.

The AI market is already huge, but still growing fast. The expertise needed is said to be growing “exponentially”. In fact, many enterprises seem to be struggling to meet this fast-growing demand for expertise with the needed capabilities.

AI’s role is already significant, but it is still transforming many painstakingly slow processes in diverse fields, typically displacing manual as well as skilled labour. For example, precision agriculture uses equipment to supply water and plant nutrients, measure plant growth, eliminate pests, including weeds, and cater to the needs of individual plants.

Driverless cars are at very advanced stages of testing in many jurisdictions, while AI is improving supply chains and logistics. AI-based equipment is being used to track criminals, assist police and solve crimes, while its military applications, including killing enemy targets, are already infamous, not least because of the collateral damage caused.

AI applications in healthcare, elderly care, and precision medicine and surgery are among the better known. AI machines can do many things more efficiently than humans and even perform tasks too dangerous or difficult for human beings.

The mantra we are urged to accept is that we must embrace all AI without qualification or risk being left further behind. But there is no reason to define the challenge in such all-or-nothing terms.

Much AI development, and many of its applications, are driven by business considerations, and business in turn shapes politics and the law, influencing science and technology, and how AI and its uses are seen and understood.

Big business and its representatives have long managed, shaped and manipulated public knowledge, opinion and sentiment, not only about AI and its applications, but also about industry’s accountability and responsibilities. AI depends heavily on information, especially big data, in order to mimic and improve upon human thought processes and behaviour.

The issue of breach of privacy has received considerable attention, as individual freedom, privacy and property rights have allegedly been violated. Frequent apologies by tech companies for earlier breaches, and even the sale of personal data, have become so routine as to cast doubt on their sincerity.

AI’s continued progress may displace many more workers very quickly, as some scenarios suggest, while others suggest that AI’s advent will enable us to devote more time to care work and creative endeavours. With so much conjecture, it is difficult to plan or, for example, revise our educational curricula.

For businesses involved with AI, whether established or start-ups, financial bottom lines are crucial, although deep pockets and medium-term strategies may give start-ups a longer lease of life.

But to survive, beating the competition remains imperative, which often means being the biggest, the best and the most innovative in the face of challenges from disruptive new technologies marginalising and displacing incumbents.

AI’s role in advancing medical technology has also enhanced its reputation for doing good, eclipsing the plight of victims of AI. Meanwhile, there has been growing acceptance of the individualistic ideology that we are all responsible for the decisions we make, whether explicit or implicit.

One fallacy often invoked is that we just do not know enough about AI to rush to judgement about our concerns. But clearly, the businesses involved and the governments that purchase, use and shape demand have the relevant experience and knowledge to make better-informed tentative assessments.

Policymakers generally lag behind in regulating AI, especially in developing countries. Regulating what is little known or understood remains especially challenging. AI is not merely a means to help us do things better, faster and more efficiently; we must recognise its multiple functions to begin to understand its complexity. Legislation and industry regulations must keep up with these changes.

AI is here to stay, or at least businesses, investors, politicians and technologists have decided so. AI offers potentially powerful means to enhance human capacities, but what businesses, governments and people deploy it for, and how, is another matter.

What are the responsibilities of businesses creating, selling and using AI? Will the rise and spread of AI lead to new modes of mass surveillance, control and manipulation, even digital dictatorship or authoritarianism?

The seemingly limitless potential of AI is undoubtedly attractive, even seductive. Those directly involved have identified much of the immediate and even medium-term potential. Futurologists are more likely to envision, reflect and speculate on the longer-term potential.

But much of the public, even those unfamiliar with AI, imagine its potential after encountering some applications in their own experience. As AI continues to evolve in human society, pundits increasingly debate the many dimensions of its ecosystems.

Some will disagree over how best to encourage and ensure the optimum development and use of AI for the greater good in the face of the imperatives of profits and power.

Others worry about how AI is already being used, and the potential for further abuse, but it is not clear what impact this will have on government intervention and collective social action.

Jomo Kwame Sundaram is senior adviser with the Khazanah Research Institute. He was an economics professor and United Nations assistant secretary-general for economic development. Rosli Omar was the first Malaysian to get a PhD in artificial intelligence and is now a nature photographer.
