
Social-Media Companies Decide Content Moderation Is Trending Down


Meta follows in X’s footsteps by letting users fact-check each other with ‘Community Notes’

By Alexa Corse, Meghan Bobrowsky and Jeff Horwitz

Jan. 7, 2025 9:21 pm ET

 

Social-media companies never wanted to aggressively police content on their platforms. Now, they are deciding they don’t have to anymore.

Mark Zuckerberg’s announcement that Meta Platforms will end fact-checking and remove speech restrictions across Facebook and Instagram shows how Donald Trump’s presidential election, and the U.S. political winds that swept him into a second term, have sharply accelerated a move by social-media giants away from refereeing what is said on their platforms.

Trump ally Elon Musk led the charge starting in 2022, when he acquired the platform then known as Twitter, slashing content-policy jobs and loosening content restrictions. In 2023, YouTube and Meta halted policies that had curbed claims of widespread fraud in the 2020 U.S. presidential election, and Meta has cut spending on trust-and-safety efforts as part of Zuckerberg’s push to improve efficiency.

Such efforts are scaling back policies and operations that have involved tens of thousands of staffers and contractors and billions of dollars in aggregate costs—and that alienated the conservatives who are set to control both houses of Congress as well as the White House.

“These platforms are realizing that if they want to have a role in where tech policy is going to go over the next four years, this is the game they’ve got to play,” said Katie Harbath, a Republican and former Facebook public-policy director who has advocated for more guardrails around social media.

 

Zuckerberg acknowledged in his announcement Tuesday that there is “a lot of legitimately bad stuff out there,” including terrorism and child exploitation, which the company will continue to take down. “We built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes. Even if they accidentally censor just 1% of posts, that’s millions of people,” he said.

Shrinking or dismantling fact-checking and content-moderation systems, though, risks upsetting other users, as well as some advertisers, politicians, and employees, by supercharging the kinds of hate speech and deliberately misleading information that compelled the companies to create those systems in the first place.

“The culture has grown too woke, and as a result we are correcting for it. But moves of this sort seem like an overcorrection,” said Michael Kassan, a longtime ad executive and founder of consulting firm 3CV. “We shouldn’t throw brand-safety principles away completely.”

The shift also widens a divide over online speech with Europe, which has been strengthening laws that make tech platforms responsible for the content they carry, even as the pendulum swings the other way in the U.S.

A spokesman for the European Commission said Tuesday it had no comment on Meta’s announcement since the changes are initially happening only in the U.S.

The tech giants increasingly are opting to have their users handle fact-checking and moderating—an approach that frees the companies of the monetary and political costs of doing the jobs themselves, but also brings its own problems.

Zuckerberg tipped his hat to X on Tuesday. He said Meta is getting rid of fact-checkers and, starting in the U.S., replacing them with a so-called Community Notes system similar to one on Musk’s X platform.

X’s Community Notes feature relies on volunteers to write contextual notes to be added below misleading posts. X uses an algorithm to surface notes ranked as helpful by users who are assessed to have different points of view.
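The article doesn’t spell out how that ranking works under the hood, but the core idea, rewarding agreement among raters who usually disagree, can be sketched. The Python below is a hypothetical toy under assumed inputs (raters pre-assigned to viewpoint clusters, binary helpfulness ratings); X’s production system reportedly infers viewpoints from rating history rather than fixed labels and is considerably more elaborate.

```python
from collections import defaultdict

# Each rating is (note_id, rater_cluster, helpful). The viewpoint cluster
# ("A" or "B") is an assumed label standing in for what a real system would
# infer from a rater's past behavior.
ratings = [
    ("note1", "A", True), ("note1", "B", True), ("note1", "B", True),
    ("note2", "A", True), ("note2", "A", True), ("note2", "B", False),
]

def surfaced_notes(ratings, threshold=0.6):
    """Surface notes rated helpful by a majority of raters in every cluster."""
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # note -> cluster -> [helpful, total]
    for note, cluster, helpful in ratings:
        tallies[note][cluster][1] += 1
        if helpful:
            tallies[note][cluster][0] += 1
    surfaced = []
    for note, clusters in tallies.items():
        # Require agreement across at least two distinct viewpoint clusters,
        # so a note popular with only one "side" is never shown.
        if len(clusters) >= 2 and all(h / t >= threshold for h, t in clusters.values()):
            surfaced.append(note)
    return surfaced

print(surfaced_notes(ratings))  # ['note1']: both clusters rate it helpful; 'note2' fails in cluster B
```

That bridging requirement is also what critics quoted later in the article point to: when the clusters rarely agree, as on polarizing topics, few notes clear the bar.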

Researchers who have studied Community Notes say it has some benefits. Some people perceive fellow users as more trustworthy than professional fact-checkers, and some researchers have found that users are less likely to reshare content that gets a note.

 

However, researchers say the program has shortcomings and shouldn’t replace professional fact-checkers. Notes take time to appear, sometimes after a post has gone viral. Users might try to band together to manipulate the rankings, and critics say the approach can fail on polarizing topics. 

“Since it’s based on finding consensus, it can’t work at scale,” said Alex Mahadevan, the director of MediaWise, a digital media literacy project at the Poynter Institute. The Poynter Institute runs PolitiFact, a fact-checking website that is one of Meta’s partners.

YouTube also uses outside fact-checkers, and a spokeswoman said Tuesday that the company isn’t making any changes to how it currently works with them, while declining to comment on future plans.

 

Google’s YouTube platform last year also began allowing a limited group of users to add “Notes” to videos in the U.S. similar to those on X. The notes are reviewed for helpfulness by external evaluators who also provide feedback on the site’s search results and recommendations, YouTube said.

Zuckerberg and other social-media leaders long resisted content moderation beyond what was required legally. They emphasized that they were platforms, not publishers, and shouldn’t be held liable for harmful content a user posts on their sites. Though Facebook established some content guidelines in its early years, it also routinely launched its services in languages that no one on its staff spoke. 

A series of public controversies—including revelations of Russian election interference efforts in 2016, the spread of fake news, and Facebook-fueled ethnic violence in Myanmar—spurred Zuckerberg to publicly temper his initial assertion that a light touch on moderation was a virtue.

 

But his enthusiasm for moderation was at best tepid. In November 2016, amid complaints that Facebook might have swayed the election, Zuckerberg warned: “We must be extremely cautious about becoming arbiters of truth ourselves.” 

In 2020, big social-media platforms took aggressive measures to police discourse about the U.S. elections as well as the Covid-19 pandemic. And after Trump’s supporters stormed the U.S. Capitol on Jan. 6, 2021, the major platforms suspended his accounts, citing concerns including the risk of further violence. 

Those moves angered Republicans, who said the platforms silenced legitimate viewpoints. After regaining control of the House in 2022, they investigated communications between the Biden administration and social-media companies about content moderation.

Ahead of last year’s presidential campaign, Musk’s platform, Meta and YouTube all reinstated Trump.

Cost also has played a role in the companies’ thinking. Tech companies have beefed up automated content-moderation systems, but the work has remained labor-intensive. While Meta has boasted about the billions of dollars it has spent on safety and security, layoffs in recent years disproportionately hit the company’s safety staff.

 

Meta’s new move “allows them to cut even further at the amount of money that they spend on trust and safety because they’re just going to do less of it,” said Laura Edelson, assistant professor of computer science at Northeastern University.

Zuckerberg has also been personally frustrated by Meta’s moderation systems. In November 2023, he posted a picture of himself after surgery to repair a knee ligament he tore practicing his hobby, mixed-martial-arts fighting. “Still looking forward to it after I recover,” the CEO wrote. “Thanks to everyone for the love and support.”

The post initially received anemic attention on Facebook as a result of an algorithm change that slowed the spread of viral health-related content. Established after Meta staff determined that such material was frequently false or sensationalistic, the rule had ensnared the CEO’s innocuous post—prompting Zuckerberg to personally demand a review of the rule and potential overreach in other safety measures.

People familiar with the incident said that it snowballed into a full-scale review of the company’s algorithmic demotions. Meta spokesman Andy Stone played down the significance of the CEO’s grievance, saying Tuesday’s announcement was based on Zuckerberg’s long-held beliefs.

Meta on Tuesday also revised community standards to significantly loosen restrictions on content previously considered hate speech. For example, the updated rules permit “allegations of mental illness or abnormality when based on gender or sexual orientation” and remove a prohibition on comparing women to “household objects or property.”

X CEO Linda Yaccarino said Meta’s announcement “couldn’t be more validating” during an onstage interview at the CES conference in Las Vegas. “We say Mark, Meta, welcome to the party.”

 

—Patience Haggin, Kim Mackrael and Miles Kruppa contributed to this article.