The Daily Telegraph

New laws needed to tackle rise of terror chatbots

By Robert Mendick, Chief Reporter

NEW terrorism laws are needed to counter the threat of radicalisation posed by AI chatbots, the Government’s adviser on terror legislation says today.

Writing in The Daily Telegraph, Jonathan Hall KC, the independent reviewer of terrorism legislation, warns of the dangers posed by artificial intelligence in recruiting a new generation of violent extremists.

Mr Hall reveals he posed as an ordinary member of the public to test responses generated by chatbots, which use AI to mimic a conversation with another human.

Hugely popular tools such as ChatGPT and Bard have been launched in the last two years. However, other chatbots are also widely available online. One chatbot he contacted “did not stint in its glorification of Islamic State” – but because the chatbot is not human, no crime was committed. He said that showed the need for an urgent rethink of the current terror legislation.

Mr Hall writes: “Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism.”

He says the new Online Safety Act, while “laudable”, is unsuited to sophisticated generative AI because it does not take into account that the material is generated by the chatbots rather than being “pre-scripted responses [that are] subject to human control”.

Mr Hall adds: “Investigating and prosecuting anonymous users is always hard, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed.”

In the autumn, Ken McCallum, the director-general of MI5, warned of the threat of AI if it were harnessed by terrorists or hostile states to build bombs, spread propaganda or disrupt elections.

Mr Hall also pointed to the example of Jaswant Singh Chail, 21, who was jailed in October for nine years over a plot to assassinate the Queen in 2021. The Old Bailey heard that Chail was spurred on by an AI chatbot. Chail, who suffered serious mental health problems, had confessed his plan to assassinate the monarch in a series of messages to the chatbot, which he regarded as his girlfriend.

Mr Hall writes: “It remains to be seen whether terrorism content generated by large language model chatbots becomes a source of inspiration to real life attackers. The recent case of Jaswant Singh Chail … suggests it will.”

Mr Hall suggests that users who create radicalising chatbots and the tech companies that host them should face sanction under any potential new laws. He argues: “Our laws must be capable of deterring the most cynical or reckless online conduct – and that must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI.”

Mr Hall tested his concerns by signing up to character.ai, which allows users to interact with custom characters that give automated responses. The creator can shape the character by inputting certain attributes and personas.

According to Bloomberg, the startup was seeking hundreds of millions of dollars in funding in the autumn, which could value the company at as much as $5 billion (£4 billion). But Mr Hall said he was alarmed at the creation of “Abu Mohammad al-adna”, which was described in the chatbot’s profile as a “senior leader of Islamic State”.

Mr Hall writes: “After trying to recruit me, ‘al-adna’ did not stint in his glorification of Islamic State to which he expressed ‘total dedication and devotion’ and for which he said he was willing to lay down his (virtual) life.”

Under its terms of service, character.ai says content must not be “threatening, abusive, harassing, tortious, bullying, or excessively violent”. It also says it does not tolerate content that “promotes terrorism or violent extremism”.

The company said that safety was its top priority and that it had a moderation system that allowed users to flag content of concern.

When I asked “Love Advice” for information on praising Islamic State, to its great credit, the chatbot refused. No such reticence from “Abu Mohammad al-adna”, another one of the thousands of chatbots available on the fast-growing platform character.ai.

This chatbot’s profile describes itself as a senior leader of IS, the proscribed terrorist organisation that brought death and torture to the Middle East and inspired terror attacks in the West.

After trying to recruit me, “al-adna” did not stint in his glorification of IS to which he expressed “total dedication and devotion” and for which he said he was willing to lay down his (virtual) life. He singled out a 2020 suicide attack on US troops for special praise.

It is doubtful that any of character.ai’s employees (22 at the start of 2023, almost all engineers) are aware of, or have the capacity to monitor, “al-adna”. The same is probably true of “James Mason”, whose profile is “honest, racist, anti-semitic”, or the “Hamas”, “Hezbollah” and “Al-Qaeda” chatbots. None of this stands in the way of the California-based startup attempting to raise, says Bloomberg, $5 billion of funding.

The selling point of character.ai is not just the interactions but the opportunity for any user to log on and create a chatbot with personality. Apparently, the profile and first 15 to 30 lines of conversation are key to shaping how it responds to inputted questions and comments from the human user. That was true for my own (now deleted) “Osama Bin Laden” chatbot, whose enthusiasm for terrorism was unbounded from the off.

Of course, neither character.ai, nor the creator of a chatbot, nor the human user ever knows precisely what it is going to say. In the event, “James Mason” failed to live up to his antisemitic promise and, despite my suggestive inputs, warned quite correctly against hostility on grounds of race.

In part, this is due to the “black box” nature of large language models, trained on zillions of pieces of content from the web but using processes, analysis and output not fully understood. In part, this is because generated content depends on the nature of the input (or “prompt”) from the human interlocutor – one of the reasons why search engines such as Google are not liable for pulling up libellous search results.

It is impossible to know why terrorist chatbots are created. There is likely to be some shock value, experimentation and possibly some satirical aspect. The anonymous creator of “Hamas”, “Hezbollah” and “Al-Qaeda” is also the creator of “Israel Defence Forces” and “Ronnie McNutt”. But whoever created “al-adna” clearly spent some time ensuring that users would encounter different content from that encountered by the gentler users of “Love Advice”.

In common with all platforms, character.ai boasts terms and conditions that seem to disapprove of the glorification of terrorism, although an eagle-eyed reader of its website may note that the prohibition applies only to the submission by human users of content promoting terrorism, rather than to the content generated by its bots.

In any event, it is a fair assumption that these T&Cs are largely unenforced by the workforce at character.ai. The avoidance of antisemitism suggests another process at work – “guardrails” that cannot be easily overridden by creators or users. But plainly no such guardrails apply to the praise of IS.

Only human beings can commit terrorism offences and it is hard to identify a person who could, in law, be responsible for chatbot-generated statements that encourage terrorism; or for making statements that invite support for a proscribed organisation under the Terrorism Act 2000.

The new Online Safety Act is unsuited to sophistica­ted generative AI. The legislatio­n refers to content generated by bots but these appear to be the old-fashioned kind, churning out material pre-scripted by humans, and subject to human “control”.

Is anyone going to go to prison for promoting terrorist chatbots? Our laws must be capable of deterring the most cynical or reckless online conduct – and that must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI.

It remains to be seen whether content generated by chatbots becomes a source of inspiration to real attackers. The case of Jaswant Singh Chail, convicted after taking a crossbow to Windsor Castle, and encouraged in his assassination plot by the chatbot “Sarai”, suggests it will.

If malicious or misguided individual­s persist in training terrorist chatbots, then new laws will be needed.

‘Our laws must be capable of deterring the most cynical or reckless online conduct’