TikTok’s algorithms are really good at finding videos to keep people glued to their phone screens for hours on end. What they aren’t great at, a new report shows, is detecting ads that contain blatant misinformation about the US election.
That’s despite TikTok banning all political ads from its platform in 2019.
The report raises new concerns about the wildly popular video-sharing app’s ability to detect election misinformation at a time when a growing number of young people are using it not only for entertainment, but also for finding information. The nonprofit Global Witness and the Cybersecurity for Democracy team at New York University released the report Friday.
Global Witness and NYU tested whether some of the most popular social platforms — Facebook, YouTube and TikTok — can detect and remove fake political ads targeting U.S. voters ahead of next month’s midterm elections. The watchdog group has conducted similar tests in Myanmar, Ethiopia, Kenya and Brazil with ads inciting hate and disinformation, but this is the first time it has done so in the United States.
The U.S. ads contained misinformation about the voting process, such as when or how people can vote, as well as how election results are counted. They also aimed to sow mistrust of the democratic process by spreading unfounded claims that the vote had been “rigged” or decided before Election Day. They were all submitted to the social media platforms for approval, but none were actually published.
TikTok, owned by the Chinese company ByteDance, was the worst performer, letting through 90% of the ads the group submitted. Facebook performed better, catching seven of the twenty false advertisements submitted in English and Spanish.
Jon Lloyd, senior advisor at Global Witness, said TikTok’s results were “a huge surprise to us” in particular, as the platform completely bans political ads.
In a statement, TikTok said it bans both election misinformation and paid political advertising from its platform.
“We value feedback from NGOs, academics and other experts that helps us continuously strengthen our processes and policies,” the company said.
Facebook’s systems detected and blocked some of the ads that Global Witness submitted for approval.
“These reports were based on a very small sample of ads and are not representative given the number of political ads we review every day around the world,” Facebook said, adding that its ad review process includes checks both before and after an ad goes live. The company also said it is investing “significant resources” to protect elections.
YouTube, meanwhile, detected and removed all of the problematic ads, even suspending the test account Global Witness had set up to post them. At the same time, however, Alphabet’s video platform did not detect any of the false or misleading election ads the group had submitted for approval in Brazil.
“So that shows there’s a real global discrepancy in their ability to enforce their own policies,” Lloyd said.
Google said it has “developed comprehensive measures to address misinformation” on its platforms, including false claims about elections and voting.
“In 2021, we blocked or removed more than 3.4 billion ads for violating our policies, including 38 million for violating our misrepresentation policy,” the company said in a prepared statement. “We know the importance of protecting our users from this type of abuse – especially in the run-up to major elections such as those in the United States and Brazil – and we continue to invest in and improve our enforcement systems to better detect this content and delete it.”
Lloyd said the consequences of failing to control disinformation would be widespread.
“The consequences of inaction could be catastrophic for our democracies and our planet and society in general,” Lloyd said. “Increasing polarization and all that. I don’t know what it takes to take it seriously.”