Perfil (Sábado)

The horror and the beauty of ChatGPT

- by AGUSTINO FONTEVECCHIA Executive Director @agufonte

A recent wave of excitement, fear and even dystopian predictions has taken over as users have become massively exposed to ChatGPT, a human-language conversation system, or chatbot, that feels incredibly “alive.”

It was developed by OpenAI, an artificial intelligence firm that has been valued at nearly US$30 billion and is one of the most hyped companies in the high-flying sector. Many users express amazement at the speed with which the robot can create seemingly complex texts in response to simple prompts, suggesting it could enhance or even replace humans in tasks that range from copy-editing to full-blown journalism. And this is inevitably true, as the current industrial revolution moves forward with the same implacable creative destruction as its predecessors, with all the good and the bad that comes with it. At the same time, others are seriously concerned about some uncontrolled side effects of its mass adoption, which could start with its use for cheating in educational systems and plagiarism, and extend to even more dangerous issues that span from scams and criminal activities to all-out social manipulation. And these things will inevitably occur as well. Yet, as in all previous eras of technological disruption, the uses of these technologies ultimately depend on us, and they will probably bring more benefit than harm, even if the risk of the robot apocalypse is closer than before.

While artificial intelligence has been around for decades and the study of when robots will outsmart humans can be traced back at least to Alan Turing in the 1950s, the concept has become part of mainstream jargon only recently. Companies are rushing to include it in their promotional materials as investors go crazy over any project that boasts the use of “AI” and machine learning, a related concept. AI is much more pervasive than people know, particularly on the Internet, where it is widespread in recommendation systems and in spam and fraud prevention, and is a key component behind several of the most common applications developed by Google’s and Facebook’s parent companies, Alphabet and Meta.

With ChatGPT, a practical use of AI has been at everyone’s disposal for free since November; since then it has surpassed 100 million users, becoming the fastest-growing consumer application ever. As users begin to play around with the application, they quickly marvel at how eloquently it responds to seemingly tough questions, the depth of its knowledge, and the endless potential for any task related to word processing. Even though it has built-in systems to avoid creating controversial texts related to things like hate speech, racism, and even religious and political issues, it’s relatively easy to circumvent those protections and get the robot to exhibit certain dangerous underlying biases. It’s not hard to imagine ways to apply this and other similarly advanced technologies for malicious ends. It’s also easy to imagine myriad productive ways to put this and other similar systems into play. As usual, ultimately it’s not about the technology but how we put it to work as individuals, groups and society as a whole.

Asking for ChatGPT and other innovative technologies with potentially harmful uses to be stopped is nothing more than the useless call of modern-age Luddites. Computers will continue to get more sophisticated at the pace of Moore’s Law, and quantum computing is already a reality. Much like Albert Einstein’s advancements in physics allowed for the development of the nuclear bomb, AI, quantum computing and the rest of them will enhance the capacities of bad actors, making it all that much harder for the “authorities” to try and stop them, as criminals are much quicker at adopting new technologies and more agile at putting them into play. Eventually the system will find a balance through self-regulation, community actors seeking to prevent harm, the private sector and, finally, governments putting legislation into place.

That being said, there are several short-term issues which will begin to come into play and are part of a larger narrative connected to the impact of digital disruption on the information ecosystem. For years now the quality of socio-political discourse has been in decline, in great part due to a massive change in the way information is distributed and paid for. While in the past there were high barriers to entry for any entity seeking the capacity to distribute information massively, the emergence of the Internet turned the tables on the old gatekeepers, shifting that power from the producers to the aggregators. Thus, publishers, broadcasters, and journalists saw the value of their content determined by major technology platforms with massive reach which were “free” for their users, monetized immensely, and didn’t share those profits with the content creators. That created a rift between the Googles and Facebooks of the world and news publishers, from The New York Times to Perfil and all the way across the spectrum. As the makers of the news saw their business model crumble, they reduced their investments in making the news, shrinking newsroom populations substantially and often reorienting their focus toward productivity in algorithm-pleasing articles that allowed them to build web traffic and try to claw back some digital advertising dollars. This is not just Big Tech’s fault, but also publishers’, who didn’t have the foresight to adapt their business models to the digital world.

With the emergence of ChatGPT and the definitive eruption of AI onto the scene, the potential to rely on these technologies to increase productivity on a major scale is already here. Companies have already been using them to produce news articles and other forms of content distributed via journalistic or pseudo-journalistic platforms, many of them rife with plagiarism and falsities, and often attaining high rankings on Google search pages. Newsrooms will further reduce their staff and turn to AI-generated content for more monotonous work including sports results, weather, and market information, and it will quickly make its way into work that is “higher up the informational hierarchy ladder,” until it will be difficult to imagine any piece that is 100 percent free of robot interference. Again, there are many good uses of AI for journalists, but the temptation will be there and many will eat the forbidden fruit.

Furthermore, the ease with which this technology allows a malicious actor to create seemingly real news articles made with the sole purpose of manipulating the population will increase the spread of disinformation. Coupled with other emerging technologies including AI-powered image generators, deep fakes, and other innovations, players seeking to manipulate public opinion will be armed with an arsenal of weapons difficult to counteract, particularly in a society where journalism is in decline, as is trust in institutions in general. It’s easy to imagine how AI could be another element that helps increase polarisation.

The challenges of AI for the information ecosystem, as well as the opportunities, are there for the taking. Hopefully journalists will quickly embrace these new tools rather than just denouncing them.