WhatsApp bans 250k accounts over child sex abuse images every month… and the true number could be even higher, MPs are told

  • Actual figure likely higher, as app policies mean only the most brazen offenders are spotted
  • WhatsApp has become a hotbed of criminality because messages sent on it are encrypted
  • This leaves police blind to the scale of child abuse images being shared

A quarter of a million accounts linked to child sex abuse are banned from WhatsApp every month, it was revealed last night.

And the true number is expected to be significantly higher, as the messaging app’s encryption means only the most brazen offenders can be spotted.

WhatsApp, which is owned by Facebook, has become a hotbed for criminality because messages sent on its platform are encrypted.

This means police are unable to see the content inside, leaving them blind to the scale of child abuse images being shared.
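How encryption blinds outside observers can be illustrated with a short sketch. The Python fragment below uses a simple symmetric cipher from the cryptography library purely for illustration; WhatsApp’s actual system is built on the Signal protocol, and nothing here is taken from its code.

```python
from cryptography.fernet import Fernet

# Illustrative sketch only: with end-to-end encryption, the key lives
# solely on the two endpoints, so the relay server never sees plaintext.
key = Fernet.generate_key()           # shared only by sender and recipient
sender = Fernet(key)

ciphertext = sender.encrypt(b"message text or photo bytes")

# All that the platform's servers - and therefore police - can observe:
print(ciphertext)                     # opaque bytes; content unreadable

# Only the recipient, who holds the key, can recover the original:
recipient = Fernet(key)
print(recipient.decrypt(ciphertext))  # b'message text or photo bytes'
```

In practice keys are negotiated between the users’ devices rather than shared in advance, but the effect is the same: the platform can route the data without being able to read it.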

The Commons home affairs committee heard the true number is masked by the use of encryption, as most people sharing such images do not display them on the public part of the app. 

But 250,000 users worldwide are banned each month based solely on the explicit names and profile pictures they use to identify themselves in unencrypted sections of the app.

Committee chairman Yvette Cooper asked ‘how many examples of child exploitation must be happening’ where the content is kept private. 

WhatsApp said it could not give a figure on the number of such cases, but said it sends hundreds of thousands of referrals to police every year.

Niamh Sweeney, WhatsApp’s European Director of Public Policy, said: ‘We actually ban about 250,000 accounts every month for participating in groups which are sharing child exploitation imagery.

‘We are fully end-to-end encrypted – as is the industry standard now in private messaging – but we do have what we refer to as “unencrypted surfaces”.

‘They include things like your profile photo, a group photo, a group name and a group description.’

She went on to explain how the app uses technology to identify photos, names and keywords which are linked to child abuse.
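A rough sketch of how such matching on unencrypted surfaces might look is given below. The hash set, keyword list and function name are invented for illustration; real deployments use perceptual image hashes (such as Microsoft’s PhotoDNA, which survive resizing and re-compression) rather than the exact-match SHA-256 used here to keep the example self-contained.

```python
import hashlib

# Hypothetical example data - not real values.
KNOWN_IMAGE_HASHES = {"9f2c..."}     # digests of known abuse imagery
FLAGGED_KEYWORDS = {"example-term"}  # terms linked to abuse material

def surface_flagged(profile_photo: bytes, group_name: str) -> bool:
    """Check only the visible, unencrypted parts of an account or group."""
    # Exact digest shown for simplicity; production systems use
    # perceptual hashing so altered copies of an image still match.
    photo_hash = hashlib.sha256(profile_photo).hexdigest()
    if photo_hash in KNOWN_IMAGE_HASHES:
        return True
    # Group names and descriptions can be scanned as ordinary text.
    return any(term in group_name.lower() for term in FLAGGED_KEYWORDS)

# Message contents stay encrypted throughout: the check runs only on
# details that any user of the app could already see.
print(surface_flagged(b"\x89PNG...", "family holiday photos"))  # False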
