Big Tech publishers, including Facebook, Twitter, and Google, use seven major censorship tactics to control the flow of information through their products. Under the beneficent guise of “content moderation,” most censorship by Big Tech publishers was traditionally directed at vices such as obscenity, violence, drugs, and gambling, often in direct response to advertisers who don’t want their ads to appear adjacent to such content. Increasingly, however, these publishers are applying censorship tactics to more slippery categories such as disinformation, bullying, and hate speech, where definitions are elusive and judgments are subjective and prone to bias. As the public increasingly relies on these publishers for news and information, it is vital to understand these censorship tactics and the risks they pose to free speech. Here are the 7 D’s of Big Tech censorship:
Direct Censorship
The most obvious censorship tactic is direct censorship: blocking and removing information outright. For a social media giant like Twitter, this means users can’t access specific information or share it with their networks. A recent example of direct censorship was Twitter’s decision to block a New York Post news story about alleged foreign influence in US politics involving Hunter Biden. To justify the move, Twitter cited its policy against publishing hacked materials, without presenting any evidence that the New York Post had used hacked materials in its story. Massive public blowback against Twitter’s direct censorship of a news story by an established big-city newspaper (founded in 1801) eventually forced the social media giant to announce changes to its official publishing policies, but free speech advocates remain wary.
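To make the mechanism concrete, here is a minimal, hypothetical sketch of how a platform might enforce a link blocklist at posting time. The domain list, function names, and rejection behavior are illustrative assumptions, not any platform’s actual code.

```python
# Hypothetical sketch of link-blocklist enforcement at posting time.
# The domain list and PostRejected behavior are illustrative
# assumptions, not any platform's actual implementation.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"blocked-story.example"}  # hypothetical blocklist entry

class PostRejected(Exception):
    """Raised when a post violates the link blocklist policy."""

def check_post(text: str, links: list[str]) -> str:
    """Reject the post if any shared link points to a blocked domain."""
    for link in links:
        domain = urlparse(link).netloc.lower().removeprefix("www.")
        if domain in BLOCKED_DOMAINS:
            # Direct censorship: the post never enters the network,
            # so followers cannot access or reshare the story.
            raise PostRejected(f"link to {domain} is blocked by policy")
    return text
```

The key design point is that rejection happens before publication, so blocked material is never visible anywhere on the platform, which is what distinguishes direct censorship from the softer tactics described below.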
Deplatforming
Publishers regularly block the accounts of individuals and organizations through a practice known as deplatforming. In June, the BBC reported that Facebook had removed the account of the British ska band The Specials and its lead singer Neville Staple. Staple is a person of color who was incorrectly identified as a white supremacist in Facebook’s evolving enforcement of restrictions on hate speech. Facebook eventually reversed itself in the case of The Specials, but the incident highlights the fact that Big Tech publishers have the power to enforce cancel-culture rules that are often subjective and prone to error. Facebook reported that it blocked or removed 1.3 billion accounts in Q3 2020 for policy violations, and while the bulk of these were fake accounts, some were undoubtedly legitimate. The deplatforming trend is particularly problematic for journalists and news organizations whose accounts are blocked. Following publication of the Hunter Biden story, Twitter locked the New York Post’s account entirely, preventing the paper from posting additional news stories for over 24 hours.
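For a sense of how bulk enforcement produces errors like The Specials case, here is a minimal, hypothetical sketch of an automated account sweep. The risk model, threshold, and example accounts are invented for illustration and do not reflect Facebook’s actual systems.

```python
# Hypothetical sketch of bulk automated account removal ("deplatforming").
# The risk signal and threshold are illustrative assumptions; the point
# is that at billion-account scale, even a small false-positive rate
# removes many legitimate accounts.
def fake_account_score(account: dict) -> float:
    """Stand-in for a real spam/abuse model returning a 0-1 risk score."""
    return account.get("risk_score", 0.0)  # assumed precomputed signal

def enforcement_sweep(accounts: list[dict], threshold: float = 0.9) -> list[dict]:
    """Suspend every account whose risk score crosses the threshold."""
    return [a for a in accounts if fake_account_score(a) >= threshold]

# A legitimate account with a noisy risk score gets swept up with the fakes.
accounts = [
    {"name": "spam_bot_123", "risk_score": 0.99},
    {"name": "legitimate_band", "risk_score": 0.91},  # false positive
]
suspended = enforcement_sweep(accounts)  # both accounts are removed
```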
Delegitimizing
Publishers have begun flagging content with labels intended to notify users of concerns about the legitimacy of specific posts. In February, Twitter announced a new policy for flagging photos and videos that appear to be manipulated or altered in a way that is “likely to impact public safety or cause serious harm.” Twitter soon expanded its labeling policy to flag a host of new topics, ranging from COVID-19 to election politics. Facebook has also deployed warning labels on specific categories of content, including advertising, and in June began offering users the ability to turn off political, electoral, and social-issue ads. Publishers present the delegitimizing of content with flags, labels, and categories as a “compromise” between direct censorship and a laissez-faire approach of non-interference. Nonetheless, because the labels are applied subjectively, the practice carries real potential for abuse.
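As a rough, hypothetical sketch of how label-based flagging could be wired up, consider the following. The topic names, label text, thresholds, and classifier output are assumptions for illustration, not Twitter’s or Facebook’s real systems.

```python
# Hypothetical sketch of label-based flagging ("delegitimizing").
# Topic names, thresholds, and the classifier are illustrative
# assumptions, not any platform's actual system.
from dataclasses import dataclass, field

# Policy table mapping a detected topic to the warning label shown with the post.
LABEL_POLICY = {
    "manipulated_media": "Manipulated media",
    "covid19": "Get the facts about COVID-19",
    "election": "Learn about election security efforts",
}

@dataclass
class Post:
    text: str
    labels: list = field(default_factory=list)

def classify_topics(text: str) -> dict:
    """Stand-in for a real topic classifier returning confidence scores."""
    return {"covid19": 0.92}  # assumed model output for illustration

def apply_labels(post: Post, threshold: float = 0.8) -> Post:
    """Attach a warning label for every policy topic above the threshold."""
    for topic, score in classify_topics(post.text).items():
        if topic in LABEL_POLICY and score >= threshold:
            # The post stays visible, but readers see it flagged as suspect.
            post.labels.append(LABEL_POLICY[topic])
    return post
```

Note that in this scheme the content itself is untouched; the censorship operates on the reader’s perception, which is precisely why subjective thresholds and topic definitions matter so much.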
Deamplification
Publishers have the power to amplify a news story and push it into the feeds of millions of users, often dwarfing the distribution the story would get through subscriptions and circulation. Normally, this amplification is the work of an ostensibly neutral algorithm that pushes content because it is trending in clicks, or because it mirrors the type of content users have consumed in the past. Deamplification happens when a publisher deliberately throttles that engine, preventing its algorithms from pushing out specific content and depressing a story’s visibility. In November, Facebook acknowledged using a “news ecosystem quality” score, or N.E.Q., which explicitly amplifies content from favored publishers, including The New York Times, CNN, and NPR, with the equal and opposite effect of deamplifying content from smaller publishers and independent journalists. Forbes reported that Facebook took this action as a temporary response to post-election misinformation, but Glenn Greenwald argues that The New York Times’s pressure in favor of Facebook’s N.E.Q. censorship is an obviously self-serving strategy for stifling competition, and a real threat to independent journalism.
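To see how a single quality weight can swing distribution, here is a minimal, hypothetical sketch of score-weighted feed ranking. The publisher weights and the formula are illustrative assumptions, since Facebook’s actual N.E.Q. scores are not public.

```python
# Hypothetical sketch of score-based (de)amplification in feed ranking.
# The publisher weights and formula are illustrative assumptions;
# Facebook's actual N.E.Q. scores are not public.

# Publisher quality weights: >1.0 amplifies, <1.0 deamplifies.
PUBLISHER_WEIGHT = {
    "nytimes.com": 1.5,                # assumed "favored" outlet
    "independent-blog.example": 0.4,   # assumed small publisher
}

def rank_score(engagement: float, publisher: str) -> float:
    """Scale the raw engagement signal by an editorial quality weight."""
    return engagement * PUBLISHER_WEIGHT.get(publisher, 1.0)

# Two stories with identical engagement end up with very different
# visibility once the quality weight is applied.
stories = [("independent-blog.example", 100.0), ("nytimes.com", 100.0)]
ranked = sorted(stories, key=lambda s: rank_score(s[1], s[0]), reverse=True)
print(ranked)  # the favored outlet ranks first despite equal engagement
```

The sketch makes Greenwald’s complaint concrete: nothing about the content changes, yet a hand-assigned weight determines which newsroom reaches the audience.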