After news broke that two Israeli Embassy workers were killed outside a Washington, DC, Jewish museum last week, many Americans took to social media to learn more about what happened and see what their favorite online personalities had to say. While many posts expressed sympathy for the victims, others exemplified trends that experts say are becoming more common: incendiary rhetoric, conspiracy theories, and online legitimization of real-world violence.
Before fatally shooting Sarah Milgrim and Yaron Lischinsky in what authorities are investigating as a hate crime, Elias Rodriguez published an online manifesto that called to “bring the war home.” In the text, he referenced Aaron Bushnell, the US airman who self-immolated in February 2024 in front of the Israeli Embassy to protest Israel’s military campaign in Gaza, and described “armed demonstration” as justified and moral.
The sentiments expressed in Rodriguez’s manifesto have been gaining steam online. One of the most notable examples is a video posted by 19-year-old American TikTok influencer Guy Christensen to his more than 3 million followers framing the attack as an “act of resistance.”
That video was later taken down, but a revised version that Christensen posted continued to portray Rodriguez in sympathetic terms. It included language encouraging further action against what Christensen described as “Zionist criminals.”
According to the Anti-Defamation League (ADL), some have taken to the internet to claim that the murders were part of a “false flag” operation, accusing Israel of carrying out the attack to promote sympathy for Jews or to justify war. The ADL noted that a tweet by white supremacist influencer Nick Fuentes dismissing the attack as a false flag received 1.8 million views and 39,000 likes.
These reactions have emerged within a broader and increasingly polarized online environment. Since the October 7, 2023, Hamas attack and the subsequent Israeli military operations in Gaza, digital platforms have become spaces for sharply divergent narratives. Some content reflects solidarity with Israel and concern over rising antisemitism; other posts condemn Israel and its actions unequivocally. Heated online discussions can take a turn for the extreme, sometimes going so far as to glorify violence.
Liram Koblentz-Stenzler, head of the antisemitism and global far right desk at Reichman University’s International Institute for Counterterrorism and a visiting scholar at Brandeis University, told The Media Line that online anti-Israel content is generally aligned with either far-left anticolonialism or far-right classical antisemitism.
“These two ideological streams sometimes intersect on the same platforms, even if they stem from different worldviews,” Koblentz-Stenzler explained. “The radical left often frames Israel as a colonial and apartheid state, while the far right continues to spread classic antisemitic tropes. Both sides are contributing to an increasingly hostile environment online.”
The evolution of online radicalization has accelerated over the last decade, particularly since the widespread adoption of encrypted messaging apps, anonymous forums, and short-form video platforms, Koblentz-Stenzler said. She noted that the decentralization of content creation, combined with low barriers to access and virality, has allowed ideological messages to be disseminated without being subject to editorial oversight or regulatory frameworks.
Much of the extremist content that ends up being seen by broad audiences on mainstream platforms like X, TikTok, and Instagram was originally created and shared on fringe platforms like Telegram and Gab or on various anonymous forums, Koblentz-Stenzler said.
“These ecosystems are not random. They are often coordinated and deliberate,” she explained. “We’ve documented discussions in which participants strategize how to use AI tools, generate emotionally charged content, and manipulate trends to reach younger users, particularly those in the Gen Z demographic.”
Ofir Dayan, a researcher at Israel’s Institute for National Security Studies, told The Media Line that many major social media platforms have taken a step back from content moderation.
“Companies like Meta and X have increasingly prioritized open speech over content regulation,” Dayan said. “That shift has created a dynamic where false or misleading information spreads more rapidly and with fewer checks. In such an environment, it becomes difficult to distinguish between legitimate criticism, misinformation, and incitement.”
She noted the importance of distinguishing between legitimate and illegitimate speech. “There is a tendency to frame everything under the umbrella of free speech. But when rhetoric shifts toward violence or justification of violence, it requires a different level of scrutiny, especially when these narratives circulate at scale,” she said.
Legitimate political speech and illegitimate hate speech can be difficult to distinguish, with critiques of Israeli policy sometimes blurring into hateful speech regarding Israelis or Jews. Koblentz-Stenzler noted that this “distinction is often lost in the speed and emotional tone of social media.”
The increase in extremist online rhetoric is paralleled by a similar increase offline. Dayan, who previously led a student group at Columbia University, noted that phrases like “Globalize the Intifada” have become more common across many campuses. “What we sometimes miss is that such slogans, when repeated over time without context or clarification, can be interpreted in very different ways, including violently in real life,” she said.
Koblentz-Stenzler similarly noted that many people using slogans like “from the river to the sea, Palestine will be free” don’t necessarily understand the historical or geopolitical implications of the phrase. (The ADL describes that slogan as “an antisemitic charge denying the Jewish right to self-determination, including through the removal of Jews from their ancestral homeland.”)
“This isn’t necessarily due to malice,” Koblentz-Stenzler said. “Often, it’s a reflection of poor education and shallow digital consumption patterns.”
Paradoxically, a significant portion of extremist anti-Israel speech online is coming from activists and influencers based in the West, many of whom do not have direct ties to the region. Dayan noted that their rhetoric stands in contrast to many voices from the Middle East who have come to adopt more measured positions.
“In many cases, you’ll find activists from the broader region calling for de-escalation or criticizing Hamas, while some Western voices, sometimes with limited understanding of the region’s history, are among the most forceful advocates of confrontation,” she said.
One prominent moderate voice from the Middle East is Ahmed Fouad Alkhatib, a US-based Palestinian analyst originally from Gaza. Alkhatib has publicly criticized both Hamas and the Israeli military operations in Gaza while advocating for long-term solutions grounded in coexistence and institution-building.
Because of the way social media platforms are often structured, many users are never shown content like Alkhatib’s and are instead kept in ideological echo chambers. “Once users engage with a particular narrative, the platform tends to reinforce that exposure through algorithms,” Koblentz-Stenzler said. “You may start with a video about Palestinian civilian casualties, and within hours, you’re being shown content that fully justifies political violence.”
She noted that many of those carrying out extremist attacks were radicalized online in that way rather than through membership in any formal terrorist group.
“Many cases we’ve studied involve individuals who were radicalized online over time. They may start with social grievances or identity politics, and gradually adopt more extreme views. The internet offers a space to reinforce and amplify those beliefs,” she said.
To address these dynamics, Dayan advocates for a multipronged strategy that begins with digital literacy education—“Workshops in schools can help children learn how to identify misinformation, assess sources, and recognize manipulative content”—and extends into broader policy reform.
She cited Taiwan’s fight against Chinese social media influence as a model worth examining. “They have a rapid-response strategy for disinformation—known as the 2-2-2 rule—where they respond to any false claim within two hours, using 200 words and two visual elements,” she explained. “It’s a method designed for the speed and visual language of social media.”
Koblentz-Stenzler similarly said that a broader regulatory conversation about online speech is overdue. “Right now, there is no global standard defining hate speech or incitement online,” she said. “Different countries enforce different rules. In the US, the legal threshold is high due to First Amendment protections. In Germany and other European countries, regulation is stricter. This inconsistency allows content to migrate to platforms or jurisdictions with looser controls.”
While the legal and cultural framework in the United States is robust in protecting civil liberties, it poses unique challenges when it comes to regulating harmful content. Without a legal mechanism to address violent rhetoric, responsibility often falls to the platforms themselves, many of which are hesitant to act unless compelled by public pressure or litigation.
Koblentz-Stenzler warned that without coordinated international efforts, reactive measures of that sort will likely be insufficient. “As long as extremist voices can operate across platforms without consequences, and as long as violent rhetoric is misinterpreted as satire or simply free expression, the risk remains high of further offline violence,” she said.
For Dayan, the stakes of combating online incitement are clear. “The individuals being influenced by this environment today are future lawmakers, executives, and community leaders,” she said. “How they engage with complex issues like the Israeli-Palestinian conflict will shape real-world outcomes. It’s not just about narrative; it’s about policy, security, and public trust.”