JAKARTA - A child protection organization says it found more cases of child abuse imagery on Apple's platforms in the UK alone than Apple reported globally.

In 2022, Apple scrapped its plans to detect Child Sexual Abuse Material (CSAM) following concerns that the system could be abused to surveil all users.

Apple then introduced a feature called Communication Safety, which automatically blocks and blurs nude photos sent to children.

According to The Guardian newspaper, the UK's National Society for the Prevention of Cruelty to Children (NSPCC) said that Apple drastically underreports CSAM incidents occurring on services such as iCloud, FaceTime, and iMessage.

All US technology companies are required to report detected CSAM cases to the National Center for Missing & Exploited Children (NCMEC), and in 2023, Apple made 267 reports.

Those reports are supposed to cover CSAM detected across Apple's services worldwide. However, the NSPCC independently found that Apple was implicated in 337 recorded offenses in England and Wales alone between April 2022 and March 2023.

"There is a worrying difference between the number of child abuse images that have occurred at Apple services in the UK and the number of global reports of abuse content they have made to the authorities," said Richard Collard, head of the child's online security policy at the NSPCC.

"Apple is clearly lagging behind compared to many of their colleagues in dealing with child sexual abuse when all tech companies should invest in security and prepare for the launch of the Online Security Act in the UK," Collard added.

By comparison, Google reported 1,470,958 cases in 2023. In the same period, Meta reported 17,838,422 cases on Facebook and 11,430,007 on Instagram.

Apple argues that it cannot see the contents of users' iMessage conversations because the service is end-to-end encrypted. However, NCMEC noted that Meta's WhatsApp is also end-to-end encrypted, yet Meta reported roughly 1,389,618 cases of suspected CSAM from that service in 2023.

In response to the allegations, Apple reportedly declined to comment, referring The Guardian instead to its earlier statements on user privacy.

Some child protection experts are also concerned about AI-generated CSAM. Apple has said that Apple Intelligence will not generate photorealistic images.
