EuroNews (English)

The rise of the Hitler chatbot: Will Europe be able to prevent far right radicalisation by AI?

- Amber Louise Bryce

There is no point arguing with Adolf Hitler, who only self-victimises and is, unsurprisingly, a Holocaust denier.

This is not the real Hitler risen from the dead, of course, but something equally concerning: an artificial intelligence-powered chatbot version of the fascist dictator responsible for the genocide of European Jews during World War II.

Created by the far-right US-based Gab social network, Gab AI is host to numerous AI chatbot characters, many of which emulate or parody famous historical and modern-day political figures, including Donald Trump, Vladimir Putin, and Osama Bin Laden.


Launched in January 2024, it allows users to develop their own AI chatbots, describing itself as an "uncensored AI platform founded on open-source models" in a blog post by Gab founder and self-titled "Conservative Republican Christian," Andrew Torba.

When prompted, the Hitler chatbot repeatedly asserts that the Nazi dictator was "a victim of a vast conspiracy," and "not responsible for the Holocaust, it never happened".

The Osama Bin Laden chatbot does not promote or condone terrorism in its conversations, but does also say that "in certain extreme circumstances, such as self-defence or in defence of your people, it may be necessary to resort to violence".

The development of such AI chatbots has led to growing concerns over their potential to spread conspiracy theories, interfere with democratic elections, and lead to violence by radicalising those using the service.


What is Gab Social?

Calling itself "The Home Of Free Speech Online," Gab Social was created in 2016 as a right-wing alternative to what was then known as Twitter but is now Elon Musk’s X.

Immediately controversial, it became a breeding ground for conspiracies and extremism, housing some of the angriest and most hateful voices that had been banned from other social networks, while also promoting harmful ideologies.

The potential dangers of the platform became evident in 2018, when it hit the headlines after it was discovered that the gunman in the Pittsburgh synagogue shooting had been posting on Gab Social shortly before carrying out an antisemitic massacre that left 11 people dead.


In response, several Big Tech companies began to ban the social networking site, forcing it offline due to its violations of hate speech legislation.

Although it remains banned from both Google and Apple’s app stores, it continues to have a presence through the decentralised social network Mastodon.

Early last year, Torba announced the introduction of Gab AI, detailing its aims to "uphold a Christian worldview" in a blog post that also criticised how "ChatGPT is programmed to scold you for asking 'controversial' or 'taboo' questions and then shoves liberal dogma down your throat".


The potential dangers of AI chatbots

The AI chatbot market has grown exponentially in recent years, valued at $4.6 billion (roughly €4.28 billion) in 2022, according to DataHorizzon Research.

From romantic avatars on Replika to virtual influencers, AI chatbots continue to infiltrate society and redefine our relationships in ways yet to be fully understood.

In 2023, a man was convicted after attempting to kill Queen Elizabeth II, an act which he said was “encouraged” by his AI chatbot 'girlfriend'.

The same year, another man killed himself after a six-week-long conversation about the climate crisis with an AI chatbot named Eliza on an app called Chai.

While the above examples are still tragic exceptions rather than the norm, fears are swelling around how AI chatbots could be used to target vulnerable people, extracting data from them or manipulating them into potentially dangerous beliefs or actions.


"From our recent research, it appears that extremist groups have been testing AI tools, including chatbots, but there seems to be little evidence of large-scale coordinated efforts in this space," Pauline Paillé, a senior analyst at RAND Europe, told Euronews Next.

"However, chatbots are likely to present a risk, as they are capable of recognising and exploiting emotional vulnerabilities and can encourage violent behaviours," Paillé warned.

When asked to comment on whether their AI chatbots pose a risk of radicalisation, a Gab spokesperson responded: "Gab AI Inc is an American company, and as such our hundreds of AI characters are protected by the First Amendment of the United States. We do not care if foreigners cry about our AI tools".

How will AI chatbots be regulated across Europe?

Key to regulating AI chatbots will be the introduction of the world’s first AI Act, due to be voted on by the European Parliament’s legislative assembly in April.

The EU AI Act aims to regulate AI systems across four main categories according to their potential risk to society.


"What constitutes illegal content is defined in other laws either at EU level or at national level - for example, terrorist content or child sexual abuse material or illegal hate speech is defined at EU level," a European Commission spokesperson told Euronews Next.

"When it concerns harmful, but legal content, such as disinformation, providers of very large online platforms and of very large online search engines should deploy the necessary means to diligently mitigate systemic risks".

Meanwhile, in the UK, Ofcom is in the process of implementing the Online Safety Act.

Under the current law, social media platforms must assess the risk to their users, taking responsibility for any potentially harmful material.

"They will need to take appropriate steps to protect their users, and remove illegal content when they identify it or are told about it. And the largest platforms will need to consistently apply their terms of service," an Ofcom spokesperson said.


Generative AI services and tools that form part of a social network therefore carry a responsibility to self-regulate, although Ofcom’s new Codes of Practice and Guidance won’t be finalised until the end of this year.

"We expect services to be fully prepared to comply with their new duties when they come into force. If they don’t comply, we’ll have a broad range of enforcement powers at our disposal to ensure they’re held fully accountable for the safety of their users," Ofcom said.

[Image caption: A Hitler chatbot is being hosted on the far-right Gab AI, part of the Gab social network.]
