Instagram is failing to remove accounts that attract hundreds of sexual comments on pictures of children in swimwear or other states of partial undress, even after those accounts are reported or flagged.
Despite parent company Meta’s claim that it has a zero-tolerance policy on child exploitation, accounts reported as suspicious have been ruled acceptable by its automated moderation tools.
In one case, a researcher reported an account that shared images of children in sexualized poses. Instagram responded the same day, saying that “due to high volume” it had not been able to view the report, but that its “technology has found that this account probably doesn’t go against our community guidelines”.
The researcher was then advised to block the account, unfollow it, or report it again.
Similar accounts were found operating on Twitter. One, which shared pictures of a man performing sexual acts over images of a 14-year-old TikTok influencer, was deemed not to break Twitter’s guidelines.
More concerning still, the man used his posts to seek contact with others. “Looking to trade some younger stuff,” one of his tweets said. It was removed only after the campaign group Collective Shout posted about it publicly.
Andy Burrows, head of online safety policy at the NSPCC, characterized the accounts as a “shop window” for pedophiles. In a statement he said:
Companies should be proactively identifying this content and then removing it themselves. But even when it is reported to them, they are judging that it’s not a threat to children and should remain on the site.
He also called on MPs to close loopholes in the proposed online safety bill, which is intended to regulate social media companies and will be debated in parliament on 19 April. Companies, he said, should be forced to deal not only with illegal content but also with content that is clearly harmful yet may not meet the criminal threshold.