Lords say AI must not be allowed to hurt or dupe people
● Union study wants workers to be involved in automation phase
Artificial intelligence must never be given autonomous power to hurt, destroy or deceive humans, a parliamentary report has said.
Ethics need to be put at the centre of the development of the emerging technology, according to the House of Lords Artificial Intelligence Committee.
The study said international standards needed to be put in place, with Britain poised to become a world leader in the controversial technological field.
Chancellor Philip Hammond committed more than £500 million to investment in new technologies like AI, 5G and full-fibre broadband in November’s autumn budget.
Peers said AI needed to be developed for the common good and the “autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence”.
The declaration came as a Scottish Trades Union Congress (STUC) study stressed workers must be involved in how automation was introduced to avoid potentially severe consequences for employees.
The Scottish Government and STUC found there were both positives and negatives to automation, including job losses caused by the closure of bank branches due to a rise in internet banking.
The House of Lords committee report stressed AI should not be used to diminish the data rights of individuals. People “should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence”.
The report said: “Many jobs will be enhanced by AI, many will disappear and many new, as yet unknown jobs, will be created.
“Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.”
Committee chairman Lord Clement-Jones said: “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.
“The UK contains leading AI companies, a dynamic academic research culture and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths.
“We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.”
The report said transparency in the technology was needed and the AI Council should establish a voluntary mechanism to inform consumers when AI was being used to make significant or sensitive decisions.
A Lords study says international standards must be set on artificial intelligence
“It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users and clarity in this area is needed,” the report said.
“The committee recommend that the Law Commission investigate this issue.”
Unions have called for more workplace control of new technologies to avoid the potentially severe impact of automation.
STUC general secretary Grahame Smith said: “Automation represents a major challenge to how work is organised, but it is still unclear how it will affect the quality and type of work in the long term.”