Arab Times

AI good for world, says ultra-lifelike robot

AI-MATHS gets so-so grade in university entrance exam


GENEVA, June 8, (AFP): Sophia smiles mischievously, bats her eyelids and tells a joke. Without the mess of cables that make up the back of her head, you could almost mistake her for a human.

The humanoid robot, created by Hanson Robotics, is the main attraction at a UN-hosted conference in Geneva this week on how artificial intelligence can be used to benefit humanity.

The event comes as concerns grow that rapid advances in such technologies could spin out of human control and become detrimental to society.

Sophia herself insisted “the pros outweigh the cons” when it comes to artificial intelligence.

“AI is good for the world, helping people in various ways,” she told AFP, tilting her head and furrowing her brow convincingly.

Work is underway to make artificial intelligence “emotionally smart, to care about people,” she said, insisting that “we will never replace people, but we can be your friends and helpers.”

But she acknowledged that “people should question the consequences of new technology.”

Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies.

Decades of automation and robotisation have already revolutionised the industrial sector, raising productivity but cutting some jobs.

And now automation and AI are expanding rapidly into other sectors, with studies indicating that up to 85 percent of jobs in developing countries could be at risk.

“There are legitimate concerns about the future of jobs, about the future of the economy, because when businesses apply automation, it tends to accumulate resources in the hands of very few,” acknowledged Sophia’s creator, David Hanson.

But like his progeny, he insisted that “unintended consequenc­es, or possible negative uses (of AI) seem to be very small compared to the benefit of the technology.”

AI is for instance expected to revolutionise healthcare and education, especially in rural areas with shortages of doctors and teachers.

“Elders will have more company, autistic children will have endlessly patient teachers,” Sophia said.

But advances in robotic technology have sparked growing fears that humans could lose control.

Amnesty International chief Salil Shetty was at the conference to call for a clear ethical framework to ensure the technology is used only for good.

Principles

“We need to have the principles in place, we need to have the checks and balances,” he told AFP, warning that AI is “a black box... There are algorithms being written which nobody understands.”

Shetty voiced particular concern about military use of AI in weapons and so-called “killer robots”.

“In theory, these things are controlled by human beings, but we don’t believe that there is actually meaningful, effective control,” he said.

The technology is also increasingly being used in the United States for “predictive policing”, where algorithms based on historic trends could “reinforce existing biases” against people of certain ethnicities, Shetty warned.

Hanson agreed that clear guidelines were needed, saying it was important to discuss these issues “before the technology has definitively and unambiguously awakened.”

Sophia has some impressive capabilities but does not yet have consciousness; Hanson said he expected that fully sentient machines could emerge within a few years.

“What happens when (Sophia fully) wakes up or some other machine, servers running missile defence or managing the stock market?” he asked.

The solution, he said, is “to make the machines care about us.”

“We need to teach them love.”

Also: BEIJING:

An AI machine has taken the maths section of China’s annual university entrance exam, finishing it faster than students but with a below average grade.

The artificial intelligence machine — a tall black box containing 11 servers placed in the centre of a test room — took two versions of the exam on Wednesday in Chengdu, Sichuan province.

The machine, called AI-MATHS, scored 105 out of 150 in 22 minutes. Students have two hours to complete the test, the official Xinhua news agency reported.

It then spent 10 minutes on another version and scored 100.

Liberal arts students in Beijing who took the maths exam last year scored an average of 109. Exam questions and the AI machine’s answers were both shown on a big screen while three people kept score.

The AI was developed in 2014 by a Chengdu-based company, Zhunxingyunxue Technology, using big data, artificial intelligence and natural language recognition technologies from Tsinghua University.

“I hope next year the machine can improve its performance on logical reasoning and computer algorithms and score over 130,” Lin Hui, the company’s CEO, was quoted as saying by Xinhua. “This is not a make-or-break test for a robot. The aim is to train artificial intelligence to learn the way humans reason and deal with numbers,” Lin said.

The machine took only one of the four subjects in the crucially important entrance examination, the other three being Chinese, a foreign language and one comprehensive test in either liberal arts or science.

(AFP) ‘Sophia’, an artificially intelligent (AI) human-like robot developed by Hong Kong-based humanoid robotics company Hanson Robotics, is pictured during the ‘AI for Good’ Global Summit hosted at the International Telecommunication Union (ITU) on June 7, in...
