JAKARTA - Artificial Intelligence (AI) has emerged as one of the most significant technological shifts in the world. While AI offers clear benefits, it is undeniable that it can also have negative impacts.

As it continues to integrate generative AI technology into more of its products and services, the search giant says it recognizes the importance of developing AI responsibly, maximizing its positive impact while addressing the potential risks that arise.

"Even though it looks complicated, we must work together to achieve long-term success," said Michaela Browning, VP, Government Affairs and Public Policy, Google Asia Pacific on the Google blog.

One approach is to provide additional context for generative AI output, such as adding an 'About this result' feature to generative AI in Google Search, so that people can better evaluate the information they find.

In the coming months, YouTube will require creators to disclose altered or realistic synthetic content, including content made with AI tools.

"We will also notify viewers about the content through the label on the description panel and video player," Browning added.

In the coming months, YouTube will also allow people to request the removal of AI-generated or other synthetic or manipulated content that simulates an identifiable person, including their face or voice, through its privacy request process.

"We have a Prohibited Use Policy for the Release of a new AI that explains in detail the content that is dangerous, inappropriate, misleading, or illegal that we prohibit," he added.

While acknowledging that there is no foolproof way to eradicate the spread of deepfakes and AI-generated misinformation, Google says that cross-institutional collaboration is urgently needed.
