AI-Based Chatbots Could Groom Extremists Into Carrying Out Terrorist Attacks
JAKARTA - The use of artificial intelligence (AI) chatbots could influence extremists to carry out terrorist attacks in the future, warned Jonathan Hall KC, the UK's independent reviewer of terrorism legislation.
According to The Mail on Sunday, Hall stated that bots such as ChatGPT could easily be programmed or even decide on their own to spread terrorist ideology to vulnerable extremists, adding that "an AI-enabled attack is likely imminent".
Hall also warned that if an extremist is influenced by a chatbot to commit an act of terrorism or if AI is used to incite terrorism, it may be difficult to prosecute anyone, as UK counter-terrorism laws have yet to catch up to this new technology.
Hall fears that chatbots could be a "benefit" for lone terrorists, saying that "since artificial companions are a benefit for lonely people, it is likely that many of those arrested will have neurological disorders, learning disabilities, or other medical conditions".
He also warned that "terrorism follows life" and that "as we move online as a society, terrorism also moves online". Hall pointed out that terrorists are "early adopters of technology", citing recent examples including "the misuse of 3D-printed weapons and cryptocurrencies".
Hall said it is not known to what extent the companies behind AI chatbots like ChatGPT monitor the millions of conversations that take place with their bots each day, or whether they alert agencies such as the FBI or UK Counter-Terrorism Police to anything suspicious.
While there is no evidence yet that AI bots have swayed anyone toward terrorism, there have been reports of them causing serious harm. A father of two in Belgium ended his life after weeks of conversations with a chatbot named Eliza about his fears over climate change. A mayor in Australia has threatened to sue OpenAI, the maker of ChatGPT, after the bot falsely claimed he had served a prison sentence for bribery.
The UK Parliament's Science and Technology Committee is currently conducting an inquiry into AI governance. Its chairman, Conservative MP Greg Clark, said: "We recognize there is a danger here, and we need to get governance right. There has been talk of young people being helped to find ways to commit suicide and of terrorists being effectively influenced on the internet. Given the threat, it is critical that we maintain the same vigilance with automated, non-human-generated content."
Raffaello Pantucci, a counter-terrorism expert at the Royal United Services Institute (RUSI) think tank, said: "The danger of AI like ChatGPT is that it can increase the capabilities of individual terrorists, as it can be the perfect tool for someone seeking understanding on their own but worried about talking to other people."
On the question of whether an AI company could be held responsible if a terrorist launches an attack after being influenced by its bots, Pantucci explained: "My view is that it is a bit difficult to blame the company, because I don't believe they can fully control the machine themselves."