Generative AI needs ‘robust regulatory frameworks’ for its growth

▶ As the technology continues to advance, perceived risks grow more pronounced, industry experts say

- ALVIN R CABRAL and ALKESH SHARMA

The rise of artificial intelligence has been meteoric, bringing with it a host of benefits, challenges and perceived risks.

Authorities have been trying to regulate the sector as new innovations within AI continue to outpace existing guidelines.

“AI needs to be regulated – it’s too important not to,” Joyce Baz, a spokeswoman for Google, one of generative AI’s main players, told The National.

“It is important to build tools and guardrails to help prevent the misuse of technology. Generative AI makes it easier than ever to create new content, but it also raises additional questions about trustworthiness of information online.”

For starters, there seems to be a “huge dissonance” between what the general public cares about when discussing generative AI and what executives and business owners do, said Thomas Monteiro, senior analyst at Investing.com.

The general public tends to focus on the “bad”, while entrepreneurs look only at the “good”, he said.

“It is more than a purely technology-related matter. It is a broad social matter for which society still hasn’t found a common ground … and this is the main challenge for regulators at this point.”

Generative AI could add as much as $4.4 trillion annually to the global economy and will boost productivity across sectors with continued investment in the technology, McKinsey & Company said this year.

The downside, however, stems from AI’s “imperfections at its inception, potentially leading to instances of inaccuracies or hallucinations”, said Chiara Marcati, a partner at McKinsey & Company.

“This underscores the need for extensive awareness, continual mental filtering of AI outcomes and an emphasis on AI literacy,” she said.

AI hallucination is a phenomenon in which a large language model – often a generative AI chatbot or computer vision tool – perceives patterns or objects that are non-existent or imperceptible to human observers, creating output that is nonsensical or altogether inaccurate, according to IBM.

In art and design, AI hallucination offers a “novel approach to artistic creation, providing artists, designers and other creative people a tool for generating visually stunning and imaginative imagery”, IBM says.

“With the hallucinatory capabilities of AI, artists can produce surreal and dreamlike images that can generate new art forms and styles.”

To illustrate this, The National last week ran a test to find out how well readers could distinguish actual images from AI-generated ones.

Of the 10 pictures, users correctly identified nine, by reasonable margins. The only image they got wrong was a particularly close call, with 54 per cent believing it was an AI image when, in fact, it was not.

“Critical thinking becomes essential to verify AI-generated outputs, as they shouldn’t replace human cognition but rather enhance and refocus attention on significant tasks,” Ms Marcati said.

This month, the EU became the first major governing body to agree on landmark AI legislation. The Artificial Intelligence Act stipulates fines of up to €35 million ($38.4 million) for non-compliance.

When issues related to AI are tackled alongside its ethical aspects, the technology will become much more flexible and adaptive, and benefit society even more, said Samer Mohamad, regional director for the Middle East and North Africa at mobility platform Yango.

“In terms of regulatory frameworks, given the varying regulatory landscapes across countries, advancements in AI and smart technologies might be shaped by local regulations, particularly about data privacy and security,” he said.

AI gained momentum – and jolted regulators – with the introduction of generative AI, which rose to prominence thanks to ChatGPT, the sensational platform from Microsoft-backed OpenAI.

Its sudden rise has also raised questions about how data is used in AI models and how the law applies to the output of those models, such as a paragraph of text, a computer-generated image or a video.

“To fully capitalise on the potential of AI, it is essential to address the need for robust regulatory frameworks, ensure societal acceptance and foster interdisciplinary collaborations,” said Pawel Czech, co-founder of Delaware-based AI company New Native.

“This will require collaboration between stakeholders – including policymakers, industry leaders and researchers – to navigate ethical considerations, workforce disruptions and data quality.”

Google’s Bard, which has attracted considerable attention, is another front-runner in the field. Microsoft has already made its AI assistant Copilot available across its Office 365 suite of applications.

Last month, Amazon Web Services launched its generative AI tool, Amazon Q. Meta Platforms, the parent company of Facebook, Instagram and WhatsApp, has also launched a series of generative AI tools.

Elon Musk, the owner of social media platform X, formerly Twitter, and chief executive of Tesla, launched xAI “to understand reality” and “the true nature of the universe”.

Mobile phone manufacturer Samsung joined the race last month with Gauss, its own ChatGPT-style platform.

Apple chief executive Tim Cook has confirmed that the company has been working on its own generative AI technology. This month, the iPhone maker was reported to have released MLX, a framework for building foundational AI models.

The breakneck speed at which companies are developing their respective AI models increases risks and raises questions about transparency, said Arun Chandrasekaran, a vice president and analyst at Gartner.

“Given the high odds at stake, this also creates an environment where technology vendors are rushing generative AI capabilities to market.”

As a result, they are “becoming more secretive about their architectures and aren’t taking adequate steps to mitigate the risks or the potential misuse of these highly powerful services”, he said.

AI needs to be developed in a way that maximises the benefits to society while addressing the challenges, Google’s Ms Baz said.

“While there is natural tension between the two, we believe it’s possible – and in fact critical – to embrace that tension productively.”

Globally, AI investments are projected to reach $200 billion by 2025 and could have a significant impact on gross domestic product, Goldman Sachs Economic Research said.

Despite current investment trends, a “more realistic outlook” beyond the hype is anticipated, given the increasing scrutiny of the technology, said Balaji Ganesan, co-founder and chief executive of California-based generative AI and data security company Privacera.

“Privacy and security will take centre stage, driving innovation in managing and safeguarding private data using foundational models.”

Advancements in AI and smart technology might be shaped by local regulations, particularly around data privacy and security, Yango’s Mr Mohamad said. “In 2024 … more concrete regulations will be introduced to curb AI’s risks and take advantage of its benefits.”

The past year has highlighted the “pressing need” to bridge the gap in AI knowledge, making it crucial to foster inclusivity between AI experts and the wider community, said Preslav Nakov, department chairman of natural language processing at Abu Dhabi’s Mohamed bin Zayed University of Artificial Intelligence.

“As generative AI becomes more integrated in different industries, organisations are getting a better grasp on how to best leverage it. The next generation of AI tools is likely to go far beyond chatbots and image generators, unlocking AI’s full potential.”


Photo: Dinsaw, the AI-powered robot by Bangkok’s CT Asia Robotics, caters to the healthcare sector. EPA
