After a gunman livestreamed his horrific massacre of 51 people at mosques in New Zealand, governments around the world demanded Big Tech do more to prevent such content from spreading online. The killings provoked numerous calls for, and acts of, censorship.
Business Insider reported that a UK Labour Party leader said YouTube should stop ALL uploads if it couldn’t stop the spread of the video. Australia’s prime minister called for the shutdown of livestreaming, and Australia’s largest telecommunications company blocked access to legitimate websites because they hosted footage of the attack.
Soon after the attacks, the Christchurch Call to Action was established. The voluntary agreement to “eliminate terrorist and violent extremist content online” was led by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron. Eighteen governments and eight tech companies signed on; thirty-one more countries joined later.
The U.S. did not. The White House issued a statement supporting “the overarching message,” but did not endorse or sign the non-binding agreement because of constitutional qualms. The Washington Post reported: “White House officials said free-speech concerns prevented them from formally signing onto the largest campaign to date targeting extremism online.”
Although many on the left and in the media criticized the Trump administration’s decision, Atlantic staff writer Graeme Wood offered an excellent defense of it.
“The United States shares many values with France and New Zealand, but a common understanding of ‘freedom of expression’ is not among them,” Wood wrote. “And in their handling of terror threats, both countries have resorted to actions that are antithetical to the American understanding of the value of freedom of expression and would be flagrantly illegal under U.S. law.” He highlighted ways in which New Zealand and France restrict, define, or punish speech that are incompatible with American values and laws.
- In September 2019, several months after Big Tech signed on, Microsoft joined Ardern and Macron before the U.N. General Assembly to share steps being taken to prevent the spread of violent extremism online. The major action was that the Global Internet Forum to Counter Terrorism formed by Microsoft, YouTube, Facebook, and Twitter was being spun out to become its own organization.
- Liberal/libertarian Electronic Frontier Foundation praised some elements of the Christchurch Call, but complained about several elements, including upload filters (EFF said they are “inconsistent with fundamental freedoms”) and how “terrorism” and “violent extremism” would be defined and by whom.
- Tauhid Zaman, a Muslim-American academic, also expressed concerns about censorship creep — that rightly banning terrorist content could lead to banning anything “offensive” or controversial. He pointed to Facebook’s decision to start banning any content glorifying white nationalism and separatism as an example. “Do we really want huge social-media sites defining what is and what isn’t white nationalism and separatism — what is and what isn’t glorification of such causes? More importantly, do we really want social-media companies defining what’s offensive and controversial speech in general?”
- Facebook instituted a “one-strike” policy in May 2019 against policy violators and said it would take away Facebook Live abilities from them for set periods of time. The same month, it also began banning “dangerous individuals” including Milo Yiannopoulos, Alex Jones and Nation of Islam leader Louis Farrakhan.
- Amazon, Facebook, Google, Microsoft and Twitter all signed on to the Christchurch Call and issued a joint plan listing nine steps they would take. All promised to update codes of conduct, terms of service and application of policies. The companies also said they would “attack the root causes of extremism and hate online” through supporting research and NGOs challenging hate.
- Because the Christchurch Call commits to developing “interventions” and algorithmic ways to “redirect users from terrorist and violent extremist content,” the American Enterprise Institute called it “social engineering.”
In the days after the attack, Microsoft President Brad Smith issued a call that went well beyond merely stopping videos of the massacre. He urged various efforts, including browser-based solutions for blocking access to violent content and “foster[ing] a healthier online environment.” “Working on digital civility has been a passion for many employees at Microsoft, who have recognized that the online world inevitably reflects the best and worst of what people learn offline,” he bragged.