JAKARTA - This week, at the Google Responsible AI Summit in Paris, Google VP Trust & Safety, Laurie Richardson revealed several Google efforts in maximizing the benefits and minimizing the risk of using artificial intelligence (AI) technology.

First, the Trust & Google Team has conducted training, and a red-teaming technique to ensure that the released GenAI products are built responsibly. Google is also committed to sharing our approach more broadly.

Second, the Google team in Trust & Safety also uses AI to improve the way companies protect users online. With AI, they can identify content that violates on a large scale.

In addition, by using LLM, the search giant is able to build and train models quickly in a matter of days. For example, with LLM, they can expand the scope of types of harassment, context, and language in a way that has never been found before.

Finally, to deal with AI-generated content, Google regularly collaborates with other industries and ecosystems to find the best solution.

Earlier this week, Google brought researchers and students together to interact with their safety experts to discuss risks and opportunities in the AI era.

To support an ecosystem that produces research that has an impact on real-world applications, Google has also doubled the number of recipients of this year's Google Academic Research Awards to develop investment in Trust & Safety research solutions.

"We are committed to implementing AI responsibly starting from using AI to strengthen our platform against abuse to developing tools," said Laurie.


The English, Chinese, Japanese, Arabic, and French versions are automatically generated by the AI. So there may still be inaccuracies in translating, please always see Indonesian as our main language. (system supported by DigitalSiber.id)