CX/CS

Instagram owns up to outsized human error in content moderation, highlighting the reliability and predictability of AI

Lorikeet News Desk

Apr 10, 2025

TL;DR

  • Instagram cites human error as a major challenge in content moderation

  • User account issues underscore the limitations of relying solely on human moderators

  • The company said it's revising processes to improve moderator decision-making for both human and AI reviewers

Driving the news: As the public's focus on the reliability of machine learning and AI-driven moderation sharpens, Instagram's head, Adam Mosseri, has identified human error as a significant challenge in the platform's content review processes.

  • "Our reviewers were making decisions without being provided the context of how conversations played out, which was a miss," he wrote in a post on Threads, responding to user feedback that highlighted flaws in Instagram's enforcement actions.

Why it matters: Mosseri's acknowledgment of human errors comes amid recent problems faced by Instagram and Threads users, including lost account access and disappearing posts.

His comments underscore the complexities of relying solely on human moderators, at a time when public and industry debates are heating up over the reliability and accuracy of AI-driven systems. Recognizing the limitations of human oversight in the age of AI is crucial for developing more effective and balanced moderation strategies, experts say.

What's next: Instagram has already implemented adjustments to better inform moderators about the content they review. "We're fixing this so they can make better decisions and we can make fewer mistakes," Mosseri said, underlining the ongoing changes to Instagram's moderation strategy. "We're striving to provide a safer experience, and we acknowledge that we need to do better."

Bottom line: The efforts to balance human and machine involvement in content moderation are part of a broader conversation about the efficacy and ethics of technology in social spaces. As platforms like Instagram continue to evolve, integrating comprehensive data for reviewers—whether human or AI—is a step toward more accurate and fair moderation practices.

Brought to you by Lorikeet

We're building an AI system capable of providing high-quality, human-grade assistance, because every company should be able to scale exceptional CX.

Learn More
