After news broke that two Israeli Embassy workers were killed outside a Washington, DC, Jewish museum last week, many Americans took to social media to learn more about what happened and see what their favorite online personalities had to say. While many posts expressed sympathy for the victims, others exemplified trends that experts say are becoming more common: incendiary rhetoric, conspiracy theories, and online legitimization of real-world violence.
Before fatally shooting Sarah Milgrim and Yaron Lischinsky in what authorities are investigating as a hate crime, Elias Rodriguez published an online manifesto that called to “bring the war home.” In the text, he referenced Aaron Bushnell, the US airman who self-immolated in February 2024 in front of the Israeli Embassy to protest Israel’s military campaign in Gaza, and he described “armed demonstration” as justified and moral.
The sentiments expressed in Rodriguez’s manifesto have been gaining steam online. One of the most notable examples is a video posted by 19-year-old American TikTok influencer Guy Christensen to his more than 3 million followers, in which he framed the attack as an “act of resistance.”
That video was later taken down, but a revised version that Christensen posted continued to portray Rodriguez in sympathetic terms. It included language encouraging further action against what Christensen described as “Zionist criminals.”
According to the Anti-Defamation League (ADL), some have taken to the internet to claim that the murders were part of a “false flag” operation, accusing Israel of carrying out the attack to promote sympathy for Jews or to justify war. The ADL noted that a tweet dismissing the attack as a false flag posted by white supremacist influencer Nick Fuentes received 1.8 million views and 39,000 likes.
These reactions have emerged within a broader and increasingly polarized online environment. Since the October 7, 2023, Hamas attack and the subsequent Israeli military operations in Gaza, digital platforms have become spaces for sharply divergent narratives. Some content reflects solidarity with Israel and concern over rising antisemitism; other posts condemn Israel and its actions unequivocally. Heated online discussions can take a turn for the extreme, sometimes going so far as to glorify violence.
Liram Koblentz-Stenzler, head of the antisemitism and global far right desk at Reichman University’s International Institute for Counterterrorism and a visiting scholar at Brandeis University, told The Media Line that online anti-Israel content is generally aligned with either far-left anticolonialism or far-right classical antisemitism.
“These two ideological streams sometimes intersect on the same platforms, even if they stem from different worldviews,” Koblentz-Stenzler explained. “The radical left often frames Israel as a colonial and apartheid state, while the far right continues to spread classic antisemitic tropes. Both sides are contributing to an increasingly hostile environment online.”
The evolution of online radicalization has accelerated over the last decade, particularly since the widespread adoption of encrypted messaging apps, anonymous forums, and short-form video platforms, Koblentz-Stenzler said.
Ofir Dayan, a researcher at Israel’s Institute for National Security Studies, told The Media Line that many major social media platforms have taken a step back from content moderation.
“Companies like Meta and X have increasingly prioritized open speech over content regulation,” Dayan said. “That shift has created a dynamic where false or misleading information spreads more rapidly and with fewer checks. In such an environment, it becomes difficult to distinguish between legitimate criticism, misinformation, and incitement.”
Eric Heinze, a law professor at Queen Mary University of London, warned against heavy-handed suppression of speech. “We should always begin with the presumption in favor of free speech,” he told The Media Line. “Punishment or censorship should only come into play when there’s demonstrable empirical evidence of harm—not vague concepts like ‘offense’ or ‘provocation,’ which are politically loaded.”
He noted that the legal frameworks regarding free speech differ significantly across democracies and complicate any shared response. “There’s a longstanding schism between the United States and Europe when it comes to free speech,” he said. “The US protects even provocative speech unless it’s likely to lead to imminent lawless action, whereas many European democracies allow for restricting ideas purely on grounds of offense or danger.”
“The problem is that we’re no longer in the world of a few dozen public speeches. We’re in a world of millions of posts per hour, often coming from different jurisdictions,” Heinze said. “No legal system can perfectly regulate that volume without either over- or underregulating.”
Koblentz-Stenzler noted that extremist content often migrates from fringe sites like Telegram and Gab to mainstream platforms through coordinated networks that exploit virality and emotional manipulation. “We’ve documented discussions where participants strategize how to use AI tools and trending audio to reach Gen Z users,” she said.
But Heinze argued that even troubling speech like that of Guy Christensen should be viewed with caution before jumping to punitive conclusions. “If he had made those statements at a public gathering like Hyde Park Corner, I would defend his right to say it,” Heinze said. “Online, things are more complicated because speech can lead to unpredictable mobilization—but that still doesn’t mean we abandon our free speech principles.”
Dayan emphasized that slogans such as “Globalize the Intifada” or “From the river to the sea” may be interpreted in many ways—some potentially violent—depending on context. Koblentz-Stenzler added that people using these phrases often lack an understanding of their historical significance, saying, “This isn’t necessarily due to malice. Often, it’s a reflection of poor education and shallow digital consumption patterns.”
Heinze similarly challenged the idea that concepts like “glorifying terrorism” are always clear-cut. “If I praise the American or French Revolution, am I glorifying violence? Of course I am. These categories—extremism, terrorism—are often artificial. Once we begin punishing based on vague terms like these, we risk silencing legitimate political speech,” he said.
To address the issue of extremist online rhetoric, Dayan advocates for a multipronged strategy that starts with education. “Workshops in schools can help children learn how to identify misinformation, assess sources, and recognize manipulative content,” she said. Koblentz-Stenzler pointed to Taiwan’s 2-2-2 disinformation response model as an innovative example of fast, clear counter-messaging. (The method involves posting a response to every piece of online misinformation “within two hours, using 200 words and two visual elements.”)
Heinze agreed with the need for a proactive approach. “Instead of trying to draw red lines that will always be arbitrary, the focus should be on digital education,” he said. “Help users recognize biases, understand context, and engage critically. You won’t fix this with takedowns alone.”
Still, Heinze warned that platforms can’t be treated as private actors in the same way as individuals. “It’s laughable to say Twitter or Meta are just like private citizens. These are multinational corporations with more communicative power than many states,” he said. “Treating them as if they’re legally indistinguishable from a guy with a blog is conceptually wrong.”
In his view, no legal regime will perfectly solve this tension. “Every attempt to regulate online speech will either risk lives through underregulation or risk censorship through overregulation,” he said. “There is no formula. The best we can do is be transparent about the trade-offs and accept that perfect balance is not achievable.”
There may be no perfect solution, but the consequences of inaction are high. Dayan noted that the internet users being exposed to extremist rhetoric are “future lawmakers, executives, and community leaders.”
“How they engage with complex issues like the Israeli-Palestinian conflict will shape real-world outcomes,” she said. “It’s not just about narrative; it’s about policy, security, and public trust.”