Meta is abandoning the use of independent fact checkers on Facebook and Instagram, replacing them with X-style “community notes” where commenting on the accuracy of posts is left to users.
In a video posted alongside a blog post by the company on Tuesday, chief executive Mark Zuckerberg said third-party moderators were “too politically biased” and it was “time to get back to our roots around free expression”.
Joel Kaplan, who is replacing Sir Nick Clegg as Meta’s head of global affairs, wrote that the company’s reliance on independent moderators was “well-intentioned” but had too often resulted in the censoring of users.
However, campaigners against hate speech online have reacted with dismay – and suggested the change is really motivated by a desire to get on the right side of Donald Trump.
“Zuckerberg’s announcement is a blatant attempt to cozy up to the incoming Trump administration – with harmful implications,” said Ava Lee, from Global Witness, a campaign group which describes itself as seeking to hold big tech to account.
“Claiming to avoid ‘censorship’ is a political move to avoid taking responsibility for hate and disinformation that platforms encourage and facilitate,” she added.
Emulating X
Meta’s current fact checking programme, introduced in 2016, refers posts that appear to be false or misleading to independent organisations to assess their credibility.
Posts flagged as inaccurate can have labels attached to them offering viewers more information, and be moved lower in users’ feeds.
That will now be replaced “in the US first” by community notes.
Meta says it has “no immediate plans” to get rid of its third-party fact checkers in the UK or the EU.
The new community notes system has been copied from X, which introduced it after being bought and renamed by Elon Musk.
It involves people of different viewpoints agreeing on notes which add context or clarifications to controversial posts.
“This is cool,” Musk said of Meta’s adoption of a similar mechanism.
However, the UK’s Molly Rose Foundation described the announcement as a “major concern for safety online”.
“We are urgently clarifying the scope of these measures, including whether this will apply to suicide, self-harm and depressive content,” its chairman, Ian Russell, said.
“These moves could have dire consequences for many children and young adults.”
Meta told the BBC it would consider content breaking its suicide and self-harm rules to be a “high severity” violation, and therefore subject to automated moderation systems.
Fact-checking organisation Full Fact – which participates in Facebook’s programme for verifying posts in Europe – said it “refutes allegations of bias” made against its profession.
The body’s chief executive, Chris Morris, described the change as “disappointing and a backwards step that risks a chilling effect around the world”.
‘Facebook jail’
Alongside content moderators, fact checkers sometimes describe themselves as the internet’s emergency services.
But Meta bosses have concluded they have been intervening too much.
“Too much harmless content gets censored, too many people find themselves wrongly locked up in ‘Facebook jail’, and we are often too slow to respond when they do,” wrote Mr Kaplan on Tuesday.
But Meta does appear to acknowledge there is some risk involved – Mr Zuckerberg said in his video the changes would mean “a trade-off”.
“It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down,” he said.
The approach is also at odds with recent regulation in both the UK and Europe, where big tech firms are being forced to take more responsibility for the content they carry or face steep penalties.
So it’s perhaps not surprising that Meta’s move away from this kind of oversight is US-only, for now at least.
‘A radical swing’
Meta’s blog post said the company would also “undo the mission creep” of its rules and policies – highlighting the removal of restrictions on subjects including “immigration, gender and gender identity” – which it said had stemmed political discussion and debate.
“It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” it said.
The changes come as technology firms and their executives prepare for President-elect Donald Trump’s inauguration on 20 January.
Trump has previously been a vocal critic of Meta and its approach to content moderation, calling Facebook “an enemy of the people” in March 2024.
But relations between the two men have since improved – Mr Zuckerberg dined at Mar-a-Lago, Trump’s Florida estate, in November, and Meta has donated $1m to Trump’s inauguration fund.
“The recent elections also feel like a cultural tipping point towards, once again, prioritising free speech,” said Mr Zuckerberg in Tuesday’s video.
Mr Kaplan replacing Sir Nick Clegg – a former Liberal Democrat deputy prime minister – as the company’s president of global affairs has also been interpreted as a signal of the firm’s shifting approach to moderation and its changing political priorities.
Kate Klonick, associate professor of law at St John’s University Law School, said the changes reflected a trend “that has seemed inevitable over the last few years, especially since Musk’s takeover of X”.
“The private governance of speech on these platforms has increasingly become a point of politics,” she told BBC News.
Where companies have previously faced pressure to build trust and safety mechanisms to deal with issues like harassment, hate speech, and disinformation, a “radical swing back in the opposite direction” is now underway, she added.
This article was originally published at www.bbc.com