Less than two weeks before Donald Trump is inaugurated for a second term as President, Meta is abandoning its fact-checking program in favor of a crowdsourced model that emphasizes “free expression.” The shift marks a profound change in how the company moderates content on its platforms—and has sparked fierce debate over its implications for misinformation and hate speech online.
Meta, which operates Facebook, Instagram, and Threads, had long funded fact-checking efforts to review content on its platforms. But many Republicans chafed against those policies, arguing that they disproportionately stifled right-wing speech. Last year, Trump threatened that Meta CEO Mark Zuckerberg could “spend the rest of his life in prison” if he attempted to interfere with the 2024 election.
Since Trump’s electoral victory, Zuckerberg has tried to mend the relationship by donating $1 million (through Meta) to Trump’s inaugural fund and promoting longtime conservative Joel Kaplan to become Meta’s new global policy chief. This policy change is one of the first major decisions to be made under Kaplan’s leadership, and follows the model of Community Notes championed by Trump ally Elon Musk at X, in which unpaid users, not third-party experts, police content.
Zuckerberg, in a video statement, acknowledged that the policy change might mean that “we’re going to catch less bad stuff.” When asked at a press conference Tuesday if he thought Meta’s change was in response to his previous threats, Trump said, “Probably.”
While conservatives and free-speech activists praised the decision, watchdogs and social media experts warned of its ripple effects on misinformation spread. “This type of wisdom-of-the-crowd approach can be really valuable,” says Valerie Wirtschafter, a fellow at the Brookings Institution. “But doing so without proper testing and viewing its viability around scale is really, really irresponsible. Meta’s already having a hard time dealing with bad content as it is, and it’s going to get even worse.”
Facebook and misinformation
Meta’s checkered history with combating misinformation underscores the challenges ahead. In 2016, the company launched a fact-checking program amid widespread concerns over the platform’s impact on the U.S. elections. Researchers would later uncover that the political analysis company Cambridge Analytica harvested the private data of more than 50 million Facebook users as part of a campaign to support Trump.
As part of that fact-checking program, Facebook relied on outside organizations like The Associated Press and Snopes to review posts and either remove them or add an annotation. But the company’s efforts still fell short in many ways. In 2022, Amnesty International found that Meta’s algorithms and lack of content moderation “substantially contributed” to fomenting violence against the Rohingya people in Myanmar.
In 2021, a study found that Facebook could have prevented billions of views on pages that shared misinformation related to the 2020 election but failed to adjust its algorithms to do so. Some of those pages glorified violence in the lead-up to the Jan. 6, 2021, attack on the U.S. Capitol, the study found. (Facebook called the report’s methodology “flawed.”) The day after the Capitol riot, Zuckerberg banned Trump from Facebook, writing that “the risks of allowing the President to continue to use our service during this period are simply too great.”
But as critics clamored for more moderation on Meta platforms, a growing contingent stumped for less. In particular, some Republicans felt that Meta’s fact-checking partners were biased against them. Many were particularly incensed when Facebook, under pressure from Biden Administration officials, cracked down on disputed COVID-19 claims, including the theory that the virus had man-made origins. Some U.S. intelligence officials subsequently lent support to that “lab leak” theory, prompting Facebook to reverse the ban. As criticism from both sides grew, Zuckerberg moved to reduce his exposure by simply deprioritizing news across Meta’s platforms.
Pivoting to Community Notes
As Zuckerberg and Meta weathered criticism over their fact-checking tactics, billionaire Tesla CEO Musk bought Twitter in 2022 and took a different approach. Musk disbanded the company’s safety teams and instead championed Community Notes, a system in which users collaboratively add context or corrections to misleading posts. Community Notes, Musk felt, was more populist, less biased, and far cheaper for the company.
Twitter, which Musk quickly renamed X, ended free access to its API, making it harder for researchers to study how Community Notes affected the spread of hate speech and misinformation on the platform. The studies that have been conducted have produced mixed findings. In May, one scientific study found that Community Notes on X were effective in combating misinformation about COVID-19 vaccines and cited high-quality sources in doing so. Conversely, the Center for Countering Digital Hate found in October that the majority of accurate Community Notes were never shown to all users, allowing the original false claims to spread unchecked. Those misleading posts, which included claims that Democrats were importing illegal voters and that the 2020 election was stolen from Trump, racked up billions of views, the report found.
Now, Meta will attempt to replicate a similar system on its own platforms, starting in the U.S. In announcing the decision, Zuckerberg and Kaplan did little to hide its political valence. Kaplan, previously deputy chief of staff to George W. Bush, unveiled the change on Fox & Friends, saying it would “reset the balance in favor of free expression.” Zuckerberg, who recently visited Trump at Mar-a-Lago, contended in a video statement that “the fact checkers have just been too politically biased, and have destroyed more trust than they’ve created.” He added that restrictions on controversial topics like immigration and gender would be lifted.
Trump received Meta’s announcement warmly. “I thought it was a very good news conference. Honestly, I think they’ve come a long way,” he said on Tuesday about the change. Meta’s decision may also alter the calculus for congressional Republicans who have been pushing to pass legislation cracking down on social media or attempting to rewrite Section 230 of the Communications Decency Act, which protects tech platforms from lawsuits over content posted by their users.
Many journalists and misinformation researchers responded with dismay. “Facebook and Instagram users are about to see a lot more dangerous misinformation in their feeds,” Public Citizen wrote on X. The tech journalist Kara Swisher wrote that Zuckerberg’s scapegoating of fact-checkers was misplaced: “Toxic floods of lies on social media platforms like Facebook have destroyed trust, not fact checkers,” she wrote on Bluesky.
Wirtschafter, at the Brookings Institution, says that Meta’s pivot toward Community Notes isn’t necessarily dangerous on its own. In 2023 she co-authored a paper with Sharanya Majumder that found that although X’s Community Notes faced challenges in reaching consensus around political content, the program’s quality improved as the company tinkered with it—and as its contributor base expanded. “It’s a very nuanced program with a lot of refinement over years,” she says.
Meta, in contrast, seems to be rolling out the program with far less preparation, Wirtschafter says. Adding to Meta’s challenge will be creating systems fine-tuned to each of its platforms: Facebook, Instagram, and Threads are distinct in both their content and their user bases. “Meta already has a spam problem and an AI-generated content problem,” Wirtschafter says. “Content moderation is good for business in some sense: It helps clear some of that muck that Meta is already having a hard time dealing with as it is. Thinking that the wisdom-of-the-crowd approach is going to work immediately for the problems they face is pretty naive.”
Luca Luceri, a research assistant professor at the University of Southern California, says that Meta’s larger pivot away from content moderation, which Zuckerberg signaled in his announcement video, is just as concerning as the removal of fact-checking. “The risk is that any form of manipulation can be exacerbated or amplified, like influence campaigns from foreign actors, or bots which can be used to write Community Notes,” he says. “And there are other forms of content besides misinformation—for instance, related to eating disorders or mental health or self-harm—that still need some moderation.”
The shift may also negatively impact the fact-checking industry: Meta’s fact-checking partnerships accounted for 45% of the total income of fact-checking organizations in 2023, according to Poynter. The end of those partnerships could deliver a significant blow to an already underfunded sector.