Pakistan Today (Lahore)

What is the political agenda of artificial intelligence?

Could AI single-handedly decide the course of our history? Or will it end up as yet another technological invention that benefits a certain subset of humans?

- AL JAZEERA. Santiago Zabala and Claudio Gallo. Santiago Zabala is ICREA Research Professor of Philosophy at the Pompeu Fabra University in Barcelona. His latest books are ‘Being at Large: Freedom in the Age of Alternative Facts’ (2020) and ‘Outspoken: A Ma

“THE hand mill gives you society with the feudal lord; the steam mill society with the industrial capitalist,” Karl Marx once said. And he was right. We have seen over and over again throughout history how technological inventions determine the dominant mode of production and with it the type of political authority present in a society.

So what will artificial intelligence give us? Who will capitalise on this new technology, which is not only becoming a dominant productive force in our societies (just like the hand mill and the steam mill once were) but, as we keep reading in the news, also appears to be “fast escaping our control”?

Could AI take on a life of its own, as so many seem to believe it will, and single-handedly decide the course of our history? Or will it end up as yet another technological invention that serves a particular agenda and benefits a certain subset of humans?

Recently, examples of hyperrealistic, AI-generated content, such as an “interview” with former Formula One world champion Michael Schumacher, who has not been able to talk to the press since a devastating ski accident in 2013; “photographs” showing former President Donald Trump being arrested in New York; and seemingly authentic student essays “written” by OpenAI’s famous chatbot ChatGPT have raised serious concerns among intellectuals, politicians and academics about the dangers this new technology may pose to our societies.

In March, such concerns led Apple co-founder Steve Wozniak, AI heavyweight Yoshua Bengio and Tesla/Twitter CEO Elon Musk, among many others, to sign an open letter accusing AI labs of being “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” and calling on AI developers to pause their work. More recently, Geoffrey Hinton – known as one of the three “godfathers of AI” – quit Google “to speak freely about the dangers of AI” and said he, at least in part, regrets his contributions to the field.

We accept that AI – like all era-defining technology – comes with considerable downsides and dangers, but contrary to Wozniak, Bengio, Hinton and others, we do not believe that it could determine the course of history on its own, without any input or guidance from humanity. We do not share such concerns because we know that, just as is the case with all our other technological devices and systems, our political, social and cultural agendas are also built into AI technologies. As philosopher Donna Haraway explained, “Technology is not neutral. We’re inside of what we make, and it’s inside of us.”

Before we further explain why we are not scared of a so-called AI takeover, we must define and explain what AI – as we are dealing with it now – actually is. This is a challenging task, not only because of the complexity of the product at hand but also because of the media’s mythologisation of AI.

What is being insistently communicated to the public today is that the conscious machine is (almost) here, that our everyday world will soon resemble the ones depicted in movies like 2001: A Space Odyssey, Blade Runner and The Matrix.

This is a false narrative. While we are undoubtedly building ever more capable computers and calculators, there is no indication that we have created – or are anywhere close to creating – a digital mind that can actually “think”.

Noam Chomsky recently argued (alongside Ian Roberts and Jeffrey Watumull) in a New York Times article that “we know from the science of linguistics and the philosophy of knowledge that [machine learning programmes like ChatGPT] differ profoundly from how humans reason and use language”.

Despite its amazingly convincing answers to a variety of questions from humans, ChatGPT is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question”.

Mimicking German philosopher Martin Heidegger (and risking reigniting the age-old battle between continental and analytical philosophers), we might say, “AI doesn’t think. It simply calculates.”

Federico Faggin, the inventor of the first commercial microprocessor, the mythical Intel 4004, explained this clearly in his 2022 book Irriducibile (Irreducible): “There is a clear distinction between symbolic machine ‘knowledge’ … and human semantic knowledge. The former is objective information that can be copied and shared; the latter is a subjective and private experience that occurs in the intimacy of the conscious being.”

Interpreting the latest theories of quantum physics, Faggin appears to have produced a philosophical conclusion that fits curiously well within ancient Neoplatonism – a feat that may ensure he is forever considered a heretic in scientific circles despite his incredible achievements as an inventor.

But what does all this mean for our future? If our super-intelligent Centaur Chiron cannot actually “think” (and therefore emerge as an independent force that can determine the course of human history), exactly who will it benefit and give political authority to? In other words, what values will its decisions rely on?

Chomsky and his colleagues asked a similar question to ChatGPT. “As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral,” the chatbot told them. “My lack of moral beliefs is simply a result of my nature as a machine learning model.”

Where have we heard this position before? Is it not eerily similar to the ethically neutral vision of hardcore liberalism?

Liberalism aspires to confine to the private, individual sphere all the religious, civil and political values that proved so dangerous and destructive in the 16th and 17th centuries. It wants all aspects of society to be regulated by a particular – and in a way mysterious – form of rationality: the market.

AI appears to be promoting the very same brand of mysterious rationality. The truth is, it is emerging as the next global “big business” innovation that will steal jobs from humans – making labourers, doctors, barristers, journalists and many others redundant. The new bots’ moral values are identical to the market’s. It is difficult to imagine all the possible developments now, but a scary scenario is emerging.

David Krueger, assistant professor in machine learning at the University of Cambridge, commented recently in New Scientist: “Essentially every AI researcher (myself included) has received funding from big tech. At some point, society may stop believing reassurances from people with such strong conflicts of interest and conclude, as I have, that their dismissal [of warnings about AI] betrays wishful thinking rather than good counterarguments.”

If society stands up to AI and its promoters, it could prove Marx wrong and prevent the leading technological development of the current era from determining who holds political authority.

But for now, AI appears to be here to stay. And its political agenda is fully synchronised with that of free market capitalism, the principal (undeclared) goal and purpose of which is to tear apart any form of social solidarity and community.

The danger of AI is not that it is an impossible-to-control digital intelligence that could destroy our sense of self and truth through the “fake” images, essays, news and histories it generates. The danger is that this undeniably monumental invention appears to be basing all its decisions and actions on the same destructive and dangerous values that drive predatory capitalism.

