The news: Meta’s content moderation policy changes, including the end of its fact-checking program in January, have decreased content removal mistakes.
What changed? Total removals, including for hateful content and bullying, fell by as much as 50%. Meta also acted on less content proactively, before users reported it.
Meanwhile, actions against fake Facebook and Instagram accounts reached 1 billion in Q1, up from 631 million.
AI scale-back: In January, Meta said it would reduce its reliance on automated systems to scan for policy violations and would shift focus to high-severity violations such as terrorism, child exploitation, and fraud. “Using automated systems to scan for all policy violations … has resulted in too many mistakes and too much content being censored that shouldn’t have been,” Meta chief global affairs officer Joel Kaplan said in a blog post.
However, AI hasn’t been scrapped from the process. In the Transparency Report, Meta states that it’s using LLMs to remove content from review queues when the company is confident there isn’t a violation, freeing up human moderators to focus on material that’s more likely to violate community guidelines.
Our take: Meta’s effort to balance free speech and harm reduction is showing progress. But as political tensions and AI-driven scams rise, its “lighter touch” moderation will keep being put to the test.