Tech it easy on AI... it could wipe out all of humanity!
Extinction warning from Artificial Intelligence experts
EXPERTS in artificial intelligence have warned that the advanced technology could lead to the extinction of humanity.
A group including Sam Altman, CEO of ChatGPT maker OpenAI, said tackling the problem should be given the same priority as “pandemics and nuclear war”.
Scientists and tech industry leaders joined forces to issue the statement, among them computer scientist Geoffrey Hinton, often referred to as the godfather of AI, and Demis Hassabis, CEO of Google DeepMind.
The message, which was posted to the Center for AI Safety’s website yesterday, said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
But speaking earlier this month, Taoiseach Leo Varadkar said AI could tackle doctor shortages and help the HSE.
The Fine Gael leader said the developments have been “absolutely fascinating” and said he believes AI has more positives than negatives.
He told The Rest Is Politics podcast: “It’s going to change our world as much as the internet has, for the better and for worse. It’s going to be fascinating.”
He added: “In terms of medical applications, AI being able to read X-rays, pathology results quicker, at lower cost, and less error than is done by doctors and scientists... To me that’s amazing. It’s already picking up patterns and markers we didn’t know existed before, just because it’s so smart.
APPLICATIONS
“It’s extraordinary if you think about it, the practical applications that are possible with AI.
“That’s all going to be better, from the point of view of the patient. From the point of view of the doctor, we may see the doctor shortage not being as big a problem in five or ten years as it is now.”
And British PM Rishi Sunak believes AI has benefits for the economy and society.
He said: “You’ve seen that recently it was helping paralysed people to walk, discovering new antibiotics, but we need to make sure this is done in a way that is safe and secure.
“Now that’s why I met last week with CEOs of major AI companies to discuss what are the guardrails that we need to put in place, what’s the type of regulation that should be put in place to keep us safe.
“People will be concerned by the reports that AI poses existential risks, like pandemics or nuclear wars.”
The debate intensified recently after Elon Musk signed an open letter in March calling for all AI labs to “immediately pause”, for at least six months, the training of AI systems more powerful than GPT-4.
Apple co-founder Steve Wozniak also supported it. The letter said “AI systems with human-competitive intelligence can pose profound risks to society and humanity”.
It continued: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilisation? Such decisions must not be delegated to unelected tech leaders.”
It went on to say that “powerful AI systems” should be developed only once people are confident “that their effects will be positive and their risks will be manageable”.
It said that during the six-month pause labs should “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts”.
Previously, the makers of ChatGPT said: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”
It comes as Elizabeth Renieris, of Oxford’s Institute for Ethics, told the BBC: “Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable.”