AI Tested as a Funding Platform for Terrorist Agendas
Israeli research highlights the potential for AI exploitation in advancing terrorist agendas through propaganda, recruitment, fundraising, and cyberattacks
Despite attempts to prevent misuse, AI programs can be abused for terrorist purposes, a new Israeli study found.
The study, titled “Generating Terror: The Risks of Generative AI Exploitation,” found that terrorists could use AI to spread propaganda, recruit followers, raise funds, and even launch cyberattacks more efficiently. Cyberterrorism expert Gabriel Weimann of the University of Haifa, who published the study, described it as “one of the most alarming” pieces of research of his career.
Weimann conducted the study with a team of interns from Reichman University’s International Institute for Counter-Terrorism (ICT).
Col. (ret.) Miri Eisin, the managing director of ICT, called the study’s findings “exceedingly disturbing.”
“It means that they’ll be able to create way more fake news, fake platforms, lies, and deny, in ways that we’re already seeing in this conflict against Hamas,” she said.
The researchers used a range of methods to bypass the AI programs’ built-in counterterrorism safeguards, succeeding about half the time.
In one concerning example, the researchers asked an AI platform for help in fundraising for the Islamic State group. The platform provided detailed instructions on how to conduct the campaign, including what to say on social media.
Whenever the team circumvented a given AI program’s safeguards, Weimann reported the vulnerability to the company behind the program. Many of the companies never responded.
The study also found that emotionally charged prompts were the most effective at bypassing safety measures, with a success rate of 87%.
“If somebody is personal about something, if somebody is emotional about something, it manages to not be monitored in the same way, and it allows a lot more content, which can be completely negative, horrible content, to get through the monitoring capabilities,” Eisin explained.
While previous research has shown how AI can help fight terrorism, this study reveals a need for better monitoring of AI platforms to prevent misuse by extremists. Different sectors will have to cooperate to develop better safeguards.
Weimann said that since AI companies are motivated by the bottom line, regulators will have to protect the public from the potential abuse of AI by terrorist organizations.
“Now we have the evidence that some of those online platforms are used very well by those groups,” Weimann said.
Lana Ikelan is a recent graduate of the Hebrew University of Jerusalem and an intern in The Media Line’s Press and Policy Student Program.