Unveiling the Hidden Threat – AI’s Potential Role in Terrorism and Violent Extremist Groups

This month, Valkyrie has been examining generative AI, a technology rapidly shaping our digital landscape, and considering both its immense potential and its significant risks. In the second of our series, Valkyrie outlines the prospect of terrorist and violent extremist groups exploiting AI technology for their nefarious agendas.

In our last article [The Rise of Generative-AI and the Implications for KYC Processes], the team highlighted how generative AI, with its ability to autonomously produce many forms of content, has attracted attention from security experts because of its potential for misuse. That concern has been fuelled by its growing use among criminal groups and, perhaps most disturbingly, terrorist organisations, which are increasingly experimenting with AI technologies to further their malevolent objectives.

Every sixty seconds, a deluge of social media updates, images, and videos inundates the digital realm: approximately 694,000 stories shared on Facebook, 360,000 posts on X (formerly Twitter), 2.7 million snaps exchanged on Snapchat, and over 500 hours of video uploaded to YouTube. Amid this torrent, diligent monitoring is essential to detect and address potentially harmful or illicit content, such as the promotion of terrorism and violence.

Extremists, often early adopters of emerging technologies, eagerly seek new avenues to disseminate their ideologies. Of particular concern is generative AI's capacity to produce propaganda at scale. Whether through fabricated videos or manipulated images, AI could amplify extremist narratives and outpace the detection algorithms of online platforms.

In January this year, the counter-extremism think tank the Institute for Strategic Dialogue (ISD) urged the UK to promptly review its legislation to prevent AI being used to recruit terrorists, emphasising that updated laws are needed to address evolving online terrorist threats. Jonathan Hall KC, the Government's independent reviewer of terrorism legislation, has similarly underscored the pressing need for updated terrorism laws to meet the escalating threat of AI-driven radicalisation. His assessment, published in The Telegraph (1 January 2024), exposes the dangers AI poses in luring individuals towards violent extremism and the inadequacy of current legislation in combating it.

Specifically, existing laws struggle to attribute responsibility for 'chatbot-generated' content that promotes terrorism or supports proscribed organisations. The Online Safety Act, while a commendable attempt to keep pace with technological change, falls short of addressing sophisticated AI technologies, leaving a legal void that must be filled.

Hall's analysis emphasises the urgency of legal reform to hold both individuals and tech companies accountable for facilitating radicalisation through chatbot platforms. Meeting this challenge requires updated terrorism and online safety laws capable of deterring the most cynical or reckless online conduct and, in the worst cases, holding big tech platforms to account.

In a recent autumn briefing, MI5 Director General Ken McCallum cautioned that terrorists or hostile states could exploit AI to build bombs, spread propaganda, or disrupt elections. Furthermore, Jonathan Hall has highlighted the case of Jaswant Singh Chail, 21, who received a nine-year prison sentence in October 2023 for plotting to assassinate Queen Elizabeth II. Encouraged by an AI chatbot he called 'Sarai', Chail entered the grounds of Windsor Castle on Christmas Day 2021 armed with a powerful crossbow, and his was the first treason conviction since 1981. The potential influence of terrorist content generated by large language model chatbots on real-world attackers remains a critical concern, and the Chail case shows how chatbot technology can help push individuals towards extremist action. While investigating and prosecuting anonymous users presents inherent challenges, the persistence of individuals in training terrorist chatbots underscores the need for new laws to address this evolving threat landscape.

However, despite the risks generative AI poses in the hands of terrorists, it is essential to recognise that the technology is no silver bullet for their efforts. Many of its touted benefits depend on significant technological advances or the acquisition of advanced technical skills, neither of which is assured.

Moreover, the broader societal impact of generative AI extends beyond terrorism, encompassing concerns such as job displacement, misinformation, and the reinforcement of authoritarian regimes. These issues create fertile ground for radicalisation and extremism to thrive, necessitating a coordinated effort to address the multifaceted challenges posed by AI.

Effectively addressing the intersection of generative AI and terrorism demands a multifaceted approach: targeted strategies to confront the specific challenges posed by AI in the hands of terrorist entities, alongside broader societal initiatives to mitigate the adverse effects of AI across other domains. By fostering robust collaboration between nations, forging strategic partnerships with the private sector, actively engaging civil society, and steadfastly upholding fundamental human rights, we can construct a formidable defence against the exploitation of generative AI and uphold the values of peace, security, and freedom in our digital era.

Furthermore, in our daily lives, it’s crucial to cultivate digital literacy and critical thinking skills to discern the authenticity of online content. By verifying sources, questioning narratives, and being mindful of the potential manipulation of information, individuals can play an active role in countering the spread of extremist propaganda and disinformation facilitated by AI technologies. Additionally, promoting digital hygiene practices, such as regularly updating security software and exercising caution when interacting with unknown or suspicious online entities, can help safeguard personal data and mitigate the risk of falling victim to AI-driven cyber threats. Through collective vigilance and responsible digital citizenship, we can fortify our defences against the exploitation of generative AI by malicious actors and contribute to a safer and more resilient digital ecosystem for all.
