Who will Regulate Social Media?
Al-Arab, London, March 29
Immediately after any terror attack in the Western world, the old debate resurfaces about the role of Internet companies and social media platforms in inciting violence. In the wake of the tragic attack on the mosque in Christchurch, New Zealand, on Friday, March 15 – which resulted in the deaths of 50 Muslim worshippers – the controversy broke out again when Australia officially called for tighter controls on social media platforms to be taken up at the upcoming G20 Summit in Osaka, Japan. Australian Prime Minister Scott Morrison said bluntly on his Twitter account: “G20 leaders must assure that there are consequences not only for the perpetrators of these heinous attacks, but also for those who spread messages of violence.”

Yet this raises a bigger question: Who is responsible for Facebook, YouTube and the other platforms used by terrorists? Shortly after the Christchurch attack, which the perpetrator broadcast in real time on Facebook, the New Zealand police and government sent urgent requests to Facebook to delete the content, yet Facebook took hours to respond. Have we now reached a point where these companies are simply unashamed to shield criminals while refusing to cooperate with law enforcement agencies?

The leaders of Apple, Google, Twitter and Facebook claim to embody a modern, liberal ideal of the free and unfiltered exchange of ideas, knowledge and information. But does freedom of information imply the spread of a 17-minute video of a terrorist shooting innocent worshippers inside a mosque? These platforms claim that they monitor and delete all offensive content, but this is not true. These companies do not care about cleaning up their content, least of all content supporting and promoting terrorism and extremism. They must be held accountable for the content they help spread or be banned from operating in our countries. – Mashry al-Zayidi