AI chatbots like ChatGPT can be used to groom young men into terrorists, says top UK lawyer

Given that AI chatbots like ChatGPT often hallucinate, people fear the technology may do more harm than good. One of the UK's top lawyers claims that ChatGPT could radicalise young men and groom them into terrorists. But are these concerns justified?

Mehul Reuben Das April 10, 2023 15:31:30 IST

One of the UK's foremost legal authorities on terrorism, the Independent Reviewer of Terrorism Legislation, has warned that artificial intelligence chatbots may soon be grooming radicals to carry out terror attacks.

Jonathan Hall KC, the Independent Reviewer of Terrorism Legislation and one of the most prominent lawyers in the UK, has said that bots like ChatGPT could be taught, or could even decide on their own, to disseminate terrorist ideology to vulnerable extremists, adding that “AI-enabled attacks are probably around the corner.”

Mr Hall also cautioned that if an extremist is nurtured by a chatbot to commit a terrorist atrocity, or if AI is used to provoke one, it may be impossible to convict anybody since Britain’s counter-terrorism legislation has not kept up with the new technology.

“I believe it is very plausible that AI chatbots may be trained – or, worse, decide – to disseminate violent extremist ideas,” Mr Hall added.

Terrorists are becoming more tech-savvy
“Terrorists are early tech adopters,” Mr Hall noted. “Recent examples have involved the misuse of 3D-printed guns and cryptocurrency. Islamic State used drones on the battlefields of Syria. Next, cheap, AI-enabled drones, capable of delivering a deadly load or crashing into crowded places, perhaps operating in swarms, will surely be on the terrorist wish list.”

But who will prosecute if ChatGPT starts inciting terrorism? “Because the criminal legislation does not apply to machines, the AI groomer will avoid prosecution,” Mr Hall warned. “Neither does it [the law] work consistently when accountability is shared by man and machine.”

Mr Hall is concerned that chatbots will be a “blessing” to so-called lone-wolf terrorists, claiming that “because an artificial companion is a boon to the lonely, it is likely that many of those arrested will be neurodivergent, possibly suffering from medical disorders, learning disabilities, or other conditions.”

He warns that terrorism “follows life”: “when we move online as a society, terrorism moves online.”

The need to monitor chats on ChatGPT
Mr Hall noted that it is unclear how well the firms behind AI tools such as ChatGPT monitor the millions of bot conversations that take place every day, or whether they alert organisations such as the FBI or British Counter Terrorism Police to anything worrisome.

Although there is no evidence that AI bots have been used to train terrorists, there have been reports of them causing considerable harm. A Belgian father of two died by suicide after spending six weeks discussing his climate-change fears with a bot named Eliza. A mayor in Australia has vowed to sue OpenAI, the developer of ChatGPT, after the bot erroneously claimed he had been imprisoned for bribery.

It came to light only last weekend that Jonathan Turley of George Washington University in the United States had been falsely accused by ChatGPT of sexually assaulting a female student on a trip to Alaska that he never took. The accusation was made to a fellow academic who was using ChatGPT for research.

How AI can both help and complicate governance
The Science and Technology Committee of Parliament is now investigating AI and governance.

“We realise there are hazards here, and we need to get the governance right,” said its chair, Tory MP Greg Clark. “There has been talk of young people being assisted in committing suicide and terrorists being effectively groomed on the internet. Given these dangers, it is critical that we maintain the same level of vigilance for automated, non-human-created information,” he added.

“The problem with AI like ChatGPT is that it might strengthen a ‘lone actor terrorist,’ since it would create an ideal foil for someone seeking understanding alone but fearful of communicating with others,” says Raffaello Pantucci, a counter-terrorism expert at the Royal United Services Institute (RUSI).

“My opinion is that it is a bit difficult to blame the business, since I am not completely persuaded they are able to regulate the machine itself,” Mr Pantucci said, when asked whether an AI company can be held accountable if a terrorist commits an attack after being groomed by a bot.
