Regulators in the EU sent letters to Meta, TikTok, and X/Twitter Thursday, giving the platforms 24 hours to address misinformation and other “illegal” content related to the Israel-Hamas war. It’s hard to say exactly what’s happening on these platforms, however, especially when it comes to the algorithmically defined endless scroll of TikTok. Standing in stark contrast to the EU’s complaints is a new study that comes to an entirely different conclusion: there’s barely any news content on TikTok to begin with, and the algorithm avoids the scant news it has to offer.
Almost immediately after the Hamas attacks in Israel over the weekend, X, at the very least, was objectively flooded with lies about the war. The platform’s army of blue-checked users posted videos of old conflicts, footage from other parts of the world, and even clips from video games masquerading as first-hand coverage from the streets of Gaza—all of it promoted thanks to the algorithmic boost that comes with a verified account. European regulators were swift to respond. EU Commissioner Thierry Breton sent Elon Musk a letter, accusing the company of violating the Digital Services Act and demanding action within a day’s time.
A day later, Breton sent similar letters to Meta and TikTok. “TikTok has a particular obligation to protect children and teenagers from violent content and terrorist propaganda—as well as death challenges & potentially life-threatening content,” Breton said in a social media post about the letter.
TikTok acknowledges a spike in dangerous, or at least unhelpful, content on its platform, and it has beefed up resources to combat violent, hateful, or misleading content, a spokesperson said. The company claims it has its own army of 40,000 “safety professionals” who review content on TikTok, and it partners with the International Fact Checking Network to combat false claims.
Misinformation flourishes on social media, and TikTok is no exception, but the nature and extent of the problem are hard to measure. A recent example is the daylight between Breton’s allegations and a new study published in the journal New Media & Society. The study found TikTok’s algorithm is surprisingly hesitant to serve news content. “We find almost no evidence of proactive news exposure on TikTok’s behalf,” the researchers said.
Among other methods, the study documented the accounts TikTok recommends to new users. The researchers also deployed a team of 60 TikTok-scrolling robots trained to watch or skip videos based on whether their content intersected with headlines from the New York Times. The results were staggering. Out of 6,568 videos, only 6 qualified as news under the study’s parameters. And out of 10,000 recommended accounts, only 18% were related to news content.
By comparison, the researchers had the robots demonstrate an interest in football using the same method and found the recommendations were football-related 88% of the time.
There are plenty of possible explanations. The researchers noted it could be that news publishers haven’t embraced the platform, so there isn’t enough news content to saturate the feed. It’s also possible, according to the study, that the algorithm doesn’t recognize “news” as a coherent topic of interest for users, so the robots’ attempts to demonstrate their interest in news could have been doomed from the start.
The discrepancy could also come down to definitions, according to Nicholas Diakopoulos, a professor at Northwestern University and one of the study’s authors. “Our definition of news as measured may differ from how many regular folks may define ‘news,’” Diakopoulos told Gizmodo by email. EU Commissioner Breton did not immediately respond to a request for comment.
For its part, TikTok said its “For You Page,” the app’s primary feed, is tailored to individual users and treats news the same as any other content, according to a company spokesperson. The spokesperson cited verified news organizations that have hundreds of thousands or even millions of followers, including NPR, the Wall Street Journal, and USA Today.
Still, the study is surprising given the fact that one-third of adult TikTok users in a recent Pew Research Center survey said they regularly get their news on TikTok. And surveys aside, TikTok has a reputation for misinformation on a variety of subjects, news included.
The real problem comes down to how difficult it is to get a clear picture of an algorithmically tailored platform with millions or even billions of users. In the past, social media companies offered researchers free access to study their platforms (with certain restrictions), and we also learned a lot about Facebook and Twitter from leaks and legal proceedings as data scientists felt around in the dark and refined methods to assess those platforms. That’s changing: Elon Musk revoked those privileges at X, and TikTok grants only a limited number of approved researchers access to examine its internal data.
The social media giants are pushing back on the EU’s demands. “X is proportionately and effectively assessing and addressing identified fake and manipulated content during this constantly evolving and shifting crisis,” X CEO Linda Yaccarino said in a letter to Breton, posted to X on Thursday.
Meta spokesperson Ben Walters shared a blog post with Gizmodo in response to questions, detailing the company’s response to the Israel-Hamas conflict. Meta said it removed 795,000 pieces of “disturbing” content in the days following the October 7th attacks, for example.
In many respects, regulators are left at the mercy of the tech companies, relying in large part on their self-reported efforts to curb problems like misinformation or coordinated illegal activity, with few reliable methods to verify the industry’s claims. Is there a misinformation problem on TikTok? Absolutely. But if you don’t work at the company, the only way to understand the true contours of the issue is a slow, methodical look back, long after the content has spread and festered among the app’s users.