by Christopher Carbone at Daily Mail
- A new investigation reveals that Facebook approved 75% of ads containing death threats against US election workers submitted before the midterms
- Researchers at Global Witness and NYU used language from real death threats and found that TikTok and YouTube suspended the test accounts, but Facebook approved most of the ads
- ‘It’s incredibly alarming that Facebook approved ads threatening election workers with violence, lynching and killing,’ Global Witness’ Rosie Sharpe said
- A Facebook spokesperson said the company remains committed to improving its detection systems
Facebook failed to detect the vast majority of advertisements that explicitly called for violence against or the killing of US election workers ahead of the midterms, a new investigation reveals.
The probe tested Facebook, YouTube and TikTok on their ability to flag ads containing ten real-life examples of death threats issued against election workers – including statements that people would be killed, hanged or executed and that children would be molested.
TikTok and YouTube suspended the accounts that had been set up to submit the ads. But Meta-owned Facebook approved nine out of ten English-language death threats and six out of ten Spanish-language death threats for publication – 75% of the total ads the group submitted.
‘It’s incredibly alarming that Facebook approved ads threatening election workers with violence, lynching and killing – amidst growing real-life threats against these workers,’ said Rosie Sharpe, investigator at Global Witness, which partnered with New York University Tandon School of Engineering’s Cybersecurity for Democracy (C4D) team on the research.