
FrankenTech

By Robert Skidelsky

Robert Skidelsky, a member of the British House of Lords, is Professor Emeritus of Political Economy at Warwick University.

LONDON – In Mary Shelley’s novel Frankenstein; or, The Modern Prometheus, scientist Victor Frankenstein famously uses dead body parts to create a hyperintelligent “superhuman” monster that – driven mad by human cruelty and isolation – ultimately turns on its creator. Since its publication in 1818, Shelley’s story of scientific research gone wrong has come to be seen as a metaphor for the danger (and folly) of trying to endow machines with human-like intelligence.

Shelley’s tale has taken on new resonance with the rapid emergence of generative artificial intelligence. On March 22, the Future of Life Institute issued an open letter signed by hundreds of tech leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause (or a government-imposed moratorium) on developing AI systems more powerful than OpenAI’s newly released GPT-4. “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” says the letter, which currently has more than 25,000 signatories. The authors go on to warn of the “out-of-control” race “to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Musk, currently the world’s second-richest person, is in many respects the Victor Frankenstein of our time. The famously boastful South Africa-born billionaire has already tried to automate the entire process of driving (albeit with mixed results), claimed to invent a new mode of transportation with the Boring Company’s (still hypothetical) hyperloop project, and declared his intention to “preserve the light of consciousness” by using his rocket company SpaceX to establish a colony on Mars. Musk also happens to be a co-founder of OpenAI (he resigned from the company’s board in 2018 following a failed takeover attempt).

One of Musk’s pet projects is to combine AI and human consciousness. In August 2020, Musk showcased a pig with a computer chip implanted in its brain to demonstrate the so-called “brain-machine interface” developed by his tech startup Neuralink. When Gertrude the pig ate or sniffed straw, a graph tracked her neural activity. This technology, Musk said, could be used to treat memory loss, anxiety, addiction, and even blindness. Months later, Neuralink released a video of a monkey playing a video game with its mind thanks to an implanted device.

These stunts were accompanied by Musk’s usual braggadocio. Neuralink’s brain augmentation technology, he hoped, could usher in an era of “superhuman cognition” in which computer chips that optimize mental functions would be widely (and cheaply) available. The procedure to implant them, he has claimed, would be fully automated and minimally invasive. Every few years, as the technology improves, the chips could be taken out and replaced with a new model. This is all hypothetical, however; Neuralink is still struggling to keep its test monkeys alive.

While Musk tries to create cyborgs, humans could soon find themselves replaced by machines. In his 2005 book The Singularity Is Near, futurist Ray Kurzweil predicted that technological singularity – the point at which AI exceeds human intelligence – will occur by 2045. From then on, technological progress would be overtaken by “conscious robots” and increase exponentially, ushering in a better, post-human future. Following the singularity, according to Kurzweil, artificial intelligence in the form of self-replicating nanorobots could spread across the universe until it becomes “saturated” with intelligent (albeit synthetic) life. Echoing Immanuel Kant, Kurzweil referred to this process as the universe “waking up.”

But now that the singularity is almost upon us, Musk and company appear to be having second thoughts. The release of ChatGPT last year has seemingly caused panic among these former AI evangelists, causing them to shift from extolling the benefits of super-intelligent machines to figuring out how to stop them from going rogue.

Unlike Google’s search engine, which presents users with a list of links, ChatGPT can answer questions fluently and coherently. Recently, a philosopher friend of mine asked ChatGPT, “Is there a distinctively female style in moral philosophy?” and sent the answers to colleagues. One found it “uncannily human.” To be sure, she wrote, “it is a pretty trite essay, but at least it is clear, grammatical, and addresses the question, which makes it better than many of our students’ essays.”

In other words, ChatGPT passes the Turing test, exhibiting intelligent behavior that is indistinguishable from that of a human being. Already, the technology is turning out to be a nightmare for academic instructors, and its rapid evolution suggests that its widespread adoption could have disastrous consequences.

So, what is to be done? A recent policy brief by the Future of Life Institute (which is partly funded by Musk) suggests several possible ways to manage AI risks. Its proposals include mandating third-party auditing and certification, regulating access to computational power, creating “capable” regulatory agencies at the national level, establishing liability for harms caused by AI, increasing funding for safety research, and developing standards for identifying and managing AI-generated content.

But at a time of escalating geopolitical conflict and ideological polarization, preventing new AI technologies from being weaponized, much less reaching an agreement on global standards, seems highly unlikely. Moreover, while the proposed moratorium is ostensibly meant to give industry leaders, researchers, and policymakers time to comprehend the existential risks associated with this technology and to develop proper safety protocols, there is little reason to believe that today’s tech leaders can grasp the ethical implications of their creations.

In any case, it is unclear what a pause would mean in practice. Musk, for example, is reportedly already working on an AI startup that would compete with OpenAI. Are our contemporary Victor Frankensteins sincere about pausing generative AI, or are they merely jockeying for position?

Copyright: Project Syndicate, 2023. www.project-syndicate.org

