Facebook and Instagram discontinue use of fact-checkers

Meta, the parent company of Facebook and Instagram, has made a significant shift in its approach to moderating content on its platforms.

Meta is replacing third-party fact-checkers with a new system called “community notes,” in which users themselves will be responsible for evaluating the accuracy of posts. The model is similar to the feature of the same name that X (formerly Twitter) adopted after its acquisition by Elon Musk.

The change comes amid growing criticism from right-wing voices, particularly in the United States, who have long characterized Meta’s fact-checking program as politically biased.

In a video accompanying a blog post published on January 7, 2025, Meta CEO Mark Zuckerberg said that independent fact-checkers had become “too politically biased” and that it was time to “get back to our roots around free expression.”

The decision to drop independent fact-checkers is widely seen as an effort to improve relations with the incoming Trump administration. Donald Trump and his Republican allies have long accused social media platforms of censoring right-wing viewpoints.

The policy change comes just days before Trump’s inauguration as US president, prompting speculation that Zuckerberg is trying to mend ties with the Republican Party.

Trump responded positively to the announcement, telling reporters that Meta had “come a long way” and suggesting that the decision was likely a response to threats he had made against Zuckerberg in the past.

Meta’s new “community notes” system will allow users to add context or clarifications to controversial or misleading posts, effectively shifting responsibility for fact-checking onto the community. The model has drawn both support and concern.

Zuckerberg framed the community notes system as a step toward more free speech, though critics warn it may allow misinformation and harmful content to spread more widely.

Ava Lee, from the advocacy group Global Witness, voiced concern that Zuckerberg’s decision was politically motivated. She argued that the change would exacerbate the spread of disinformation and hate speech online, undermining efforts to hold tech companies accountable for the content they host.

The move reflects the growing influence of political considerations on social media governance. Meta’s new approach will roll out first in the United States and is expected to have far-reaching implications for how content is moderated across its platforms.

However, Meta has stated it has “no immediate plans” to implement the community notes system in the UK or the EU, where stricter regulations around content moderation exist.

Meta’s fact-checking program, introduced in 2016, was designed to address the spread of false or misleading information by referring flagged posts to independent organizations for review.

If a post was deemed inaccurate, it could be labeled as false, and its visibility in users’ feeds would be reduced. The new system will eliminate this third-party evaluation process, replacing it with a more decentralized approach.

This shift has drawn mixed reactions, with some stakeholders defending the original fact-checking program and others arguing that it amounted to censorship.

Joel Kaplan, Meta’s president of global affairs and a former Republican strategist, wrote in a statement that the use of independent moderators was “well-intentioned” but ultimately led to censorship.

Kaplan’s appointment to Meta’s top global role is seen by many as a sign of the company’s pivot toward more conservative policies on content moderation.

While Meta frames the policy change as a return to free speech, critics have pointed out the risks involved. By reducing its role in moderating harmful content, Meta risks allowing misinformation, hate speech, and extremism to thrive on its platforms.

Furthermore, the change runs counter to recent regulatory efforts in Europe and the UK, where governments are pushing for stricter rules to hold tech giants accountable for the content they host.

Zuckerberg acknowledged the risks, saying in his video that the changes would mean “catching less bad stuff” but also reducing the number of innocent posts and accounts mistakenly removed.

Fact-checking organizations have also weighed in. Full Fact, an independent fact-checker that works with Meta in Europe, expressed disappointment over the move, describing it as “a backward step that risks a chilling effect on free expression.”

As Meta moves away from its traditional fact-checking model, the broader question of who should be responsible for moderating content on social media platforms remains unresolved. Technology companies are increasingly caught between political pressure, public scrutiny, and the drive to maximize user engagement.

Zuckerberg’s announcement comes amid broader debates about the role of tech companies in regulating speech. As new political priorities emerge, Meta’s shift to community-driven moderation marks a radical change in its approach.

It remains to be seen whether the move will fuel more misinformation and division or foster a more open, less tightly moderated environment.