New Artificial Intelligence Tool to Combat Implicit Anti-Semitism Online (with VIDEO)
Berlin-based Alfred Landecker Foundation officially launches Decoding Anti-Semitism project with team of international researchers
A new artificial intelligence-powered tool aimed at combating implicit forms of anti-Semitism online is being developed by a team of international researchers.
The Berlin-based Alfred Landecker Foundation on Monday announced the official launch of the “Decoding Anti-Semitism” initiative, which will rely on an AI-driven approach to detect more nuanced forms of anti-Semitism in English, French and German.
The foundation, which is backing the project with roughly $3.5 million in funding, has joined forces with the Center for Research on Anti-Semitism at Berlin’s Technical University and King’s College London. A team of discourse analysts, computational linguists and historians is working together to develop the complex new technology, which initially will be focused on Germany, France and the United Kingdom.
“What’s new with this project is that it looks specifically at implicit forms of anti-Semitism because language is one of the most complex structures there is,” Dr. Andreas Eberhardt, CEO of the Alfred Landecker Foundation, told The Media Line.
“For example in Germany, given the history, it’s taboo to say certain things and people know that,” he added. “In order to circumvent that [taboo] they use codes and memes to convey their message but because it’s implicit it’s very hard to detect. This is really the challenge: to feed the machines with that.”
According to the foundation, roughly 70-80% of all anti-Semitism in the German-speaking world is not overt but rather expressed through coded language. Only an AI-based tool is capable of monitoring the massive amounts of content posted to online forums and social media every day.
“You need machines but at the end of the chain, you need humans to decide what to cancel and what not to cancel because machines cannot make this final decision,” Eberhardt emphasized.
Among the researchers working to develop the computing solution is Dr. Daniel Allington, a senior lecturer in Social and Cultural Artificial Intelligence at King’s College London. Allington is also one of the world’s leading experts in analyzing the linguistic phenomena surrounding contemporary anti-Semitism.
Allington told The Media Line that he expects to spend the next three years developing and testing the new technology to ensure the AI functions correctly.
“It will look for words and patterns of words, which make it more likely that a text is anti-Semitic,” he explained. “It’s not just a matter of recognizing bad words; it’s a matter of recognizing the kinds of ways that people tend to talk when they are expressing hate.”
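The approach Allington describes — individual words and combinations of words each shifting the likelihood that a text is hateful, with a human making the final call — can be sketched as a simple weighted-pattern scorer. This is a minimal illustration only, not the project’s actual method; all pattern names and weights below are hypothetical placeholders.

```python
# Minimal sketch of pattern-based scoring: single words and word
# co-occurrences each shift the probability that a text is flagged.
# All patterns and weights are hypothetical placeholders, not the
# project's actual lexicon or model.
import math

# Hypothetical weights: positive values raise the score, negative lower it.
UNIGRAM_WEIGHTS = {"codeword_a": 1.2, "codeword_b": 0.8, "neutral_term": -0.5}

# Co-occurrence patterns: word pairs that matter more together than apart,
# capturing "the kinds of ways people talk" rather than bad words alone.
PAIR_WEIGHTS = {frozenset(["codeword_a", "target_group"]): 1.5}

BIAS = -2.0  # baseline: most texts are not flagged

def flag_probability(text: str) -> float:
    """Score a text and squash the result to a probability in [0, 1]."""
    tokens = set(text.lower().split())
    score = BIAS
    score += sum(w for term, w in UNIGRAM_WEIGHTS.items() if term in tokens)
    score += sum(w for pair, w in PAIR_WEIGHTS.items() if pair <= tokens)
    return 1 / (1 + math.exp(-score))  # logistic function

def needs_human_review(text: str, threshold: float = 0.5) -> bool:
    # The model only flags; a human moderator makes the removal decision,
    # as Allington stresses later in the article.
    return flag_probability(text) >= threshold
```

In a real system the weights would be learned from annotated data rather than hand-set, but the flag-then-review structure matches the division of labor the researchers describe: the machine surfaces candidates, humans decide.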
In recent years, several companies, among them Google and Facebook, have developed AI-backed algorithms to identify and prevent hate speech. Facebook recently implemented an automated system that removes hate speech before users even have to flag it; according to the company, the tool detected 88.8% of the hate speech it removed – 9.6 million pieces of content – in the first quarter of 2020 alone.
But the machine learning-based method being developed by Allington and his fellow researchers in Germany is different.
“Although work done by other research groups suggests that it is possible to use AI to identify hate speech, nobody has yet really tested it with these implicit forms of hate speech that we know are so important online,” he said. “We’re not just taking the easy cases; we’re going to be focusing on examples of hate speech that are really difficult to identify as such. Nobody’s done this before.”
Allington foresees that the “Decoding Anti-Semitism” tool could eventually be used on a wide variety of social media platforms, which are often rife with hate speech.
While he acknowledged that there is a risk the tool could lead to censorship or mistakenly flag harmless content, he is hoping to avoid those pitfalls by testing the limits of what a computer can and cannot do.
“I don’t want to see legitimate speech being censored,” Allington stressed. “Equally, I don’t want to see people being able to get away with hate speech by using the sort of coded language that we often see online. It’s a question of developing tools that are as effective and as nuanced as possible in the way that they work.”
Those responsible for disseminating hate constantly evolve their language and methods in order to avoid detection. For this reason, identifying what indeed constitutes anti-Semitic speech is an exceedingly difficult task for any computer.
“It’s never going to be possible for us to hand over the job of content moderation to an AI,” Allington admitted. “The job of the AI will always be flagging content that may need to be removed. It’s going to be down to human moderators to make that decision of whether or not to remove it.”