JAKARTA - Google has refused to reinstate the account of a man after its systems mistakenly flagged a medical photo he took of his son's groin as child sexual abuse material (CSAM). Experts say it is an inevitable pitfall of trying to apply technological solutions to social problems.

Experts have long warned about the limitations of automated systems for detecting child sexual abuse images, especially as companies face regulatory and public pressure to help address the presence of child sexual abuse material.

"These companies have access to vast amounts of data about people's lives. And still they have no context for what people's lives actually are," said Daniel Kahn Gillmor, senior staff technologist at the ACLU, as quoted by The Guardian. "There are all sorts of things where the facts of your life are not legible to these information giants."

He added that the use of these systems by technology companies that "act as proxies" for law enforcement puts people at risk of being "swept up" by "the power of the state."

The man, identified only as Mark by the New York Times, took pictures of his son's groin to send to doctors after realizing that it was inflamed.

Doctors used the image to diagnose Mark's son and prescribe antibiotics. When the photos were automatically uploaded to the cloud, Google's system flagged them as CSAM. Two days later, Mark's Gmail account and his other Google accounts, including Google Fi, which provided his phone service, were deactivated over "malicious content" that was said to be a "gross violation of company policy and may be illegal."

He later learned that Google had flagged another video on his phone, and that the San Francisco Police Department had opened an investigation into him.

Mark was eventually cleared of any criminal wrongdoing, but Google has said it stands by its decision to deactivate his account.

"We follow US law in defining what a CSAM is and use a combination of hash matching technology and artificial intelligence to identify and remove it from our platform," said Christa Muldoon, a Google spokesperson.

Muldoon added that the Google staff who review CSAM had been trained by medical experts to look for rashes or other issues. Those reviewers, however, are not medical experts themselves, and medical experts are not consulted when each case is reviewed.

According to Gillmor, that is just one of the ways this system can cause harm. To address, for example, the limitations that algorithms may have in distinguishing harmful sexual abuse images from medical images, companies often involve human reviewers.

However, those reviewers are themselves inherently limited in their expertise, and getting the proper context for each case requires further access to user data. Gillmor says this is a far more intrusive process that can still be an ineffective method of detecting CSAM.

"This system can cause real problems for people," he said. “And not only because I don't think that these systems can catch every child abuse case, but they have very dire consequences in terms of false positives for people. People's lives can be completely turned upside down by machines and humans in circles simply because making bad decisions they have no reason to try to fix."

Gillmor argues that technology is not the solution to this problem. In fact, it can create many new ones, including powerful surveillance systems that can disproportionately harm people on the margins.
