Why Meta Discontinued its Fact-Checking Program on Facebook, Instagram, and Threads


The Black Mirror episode “Nosedive” paints a chilling picture of a world where every post, like, and interaction shapes a user’s social credit score. That score can determine their job, their housing, and even their flight. One bad post, one viral misstep, and their life spirals out of control. While fictional, it highlights a harsh reality: we live in an age where a single piece of misinformation can distort public opinion and social narratives in minutes, wreaking havoc.

It’s relevant now because the dynamics of information governance are about to change dramatically.

You see, Meta, the parent company of Facebook, Instagram, and Threads, is ending its third-party fact-checking program in the US. Instead, it’s introducing ‘community-driven moderation’. That sounds great, but it means that rather than professional fact-checkers reviewing social media posts, users themselves can flag any post as true or false and add context to it. Imagine a stadium of fans shouting their own rules instead of professional referees. That’s the gist of Meta’s new strategy.

So why this change, you may ask? According to Mark Zuckerberg, co-founder and CEO of Meta, it’s about reducing claims of bias against centralized fact-checking systems in the US and restoring freedom of expression.

Now, that sounds democratic, since everyone has a say. But crowds can be unpredictable, right? While they can get some things right, they can also get some things very, very wrong.

And so this change has sparked a heated debate.

Meta is no ordinary tech company. It’s a digital giant serving more than 3 billion users a day, roughly 40% of the world’s population, who visit its platforms for news, entertainment, and connection.

While Zuckerberg is banking on collective intelligence to outperform professional fact-checkers, the risks are glaring.

For example, what happens if a group of people decide to take over the system? They can flood posts with false “facts” and influence public opinion. Then there’s the issue of accountability. If a false claim spreads and causes harm, who gets the blame? The platform or the community? No one? You also need to consider that misinformation often spreads quickly. So by the time a note is added, the damage is already done. Finally, let’s face it, most of us are not experts or equipped to fact-check complex topics like climate change or vaccine science, right?

So yeah, it’s all very complicated.

For a preview, we can look to X’s Community Notes, the system Meta’s plan closely resembles. When X rolled it out, the feature received mixed reactions. Sometimes it worked great: people flagged false claims, and readers got the real context. Other times it fell flat, with notes that were confusing or missed the bigger picture.

So replicating such a model on Meta’s vast platform, with its diverse user base and heavy reliance on ad revenue, could be a whole different beast.

Which brings us to Meta’s money-making machine: advertising. Ads account for nearly 98% of its revenue, which means brands spend billions of dollars because they are confident about where their ads appear on Meta’s platforms. If community-driven moderation leads to an increase in harmful or misleading content, advertisers may get spooked and pull back. That could hurt Meta’s revenue and shake its business model.

So why is Meta taking the risk, you ask? Many say it could be politics. Meta has had a checkered history with US leaders, especially after it banned Donald Trump from its platforms following the Capitol riots. But with Trump winning the 2024 election and favoring “unfiltered” social media, perhaps Meta is trying to make nice.

But this isn’t just about the US.

The problem is that fact-checking organizations around the world rely on Meta for funding. If that funding dries up, some could close up shop altogether.

And it’s not just about money. These groups rely on Meta’s platforms to get their work seen. Facebook and Instagram drive huge traffic to their websites, helping their fact-checking reach millions.

So, is Meta’s move a bold experiment in democratization or a risky gamble? Without enough data, it may be too early to tell. But one thing seems clear: advertisers will likely keep spending on Meta’s platforms as long as Meta delivers results and returns on their investments. If so, Meta’s business should be fine.

For the rest of us, the bigger question at hand is: How do we ensure accurate information reaches people? Perhaps the answer lies in a mix of community input and expert review. Or maybe it’s time for decentralized systems that guarantee transparency.

What we know for now is that Meta’s move is reshaping the narrative of tomorrow.
