Google Reveals Three Policy Recommendations for Responsible AI Technology
Three recommendations from Google regarding AI technology (Photo: Doc. Google)

JAKARTA - Although Artificial Intelligence (AI) technology has many positive impacts, it is undeniable that its rapid development also carries negative impacts that pose risks and challenges.

According to Google, hindering AI development is not an effective approach: doing so would eliminate opportunities to realize AI's great benefits, and leave those who resist it far behind those who embrace its potential.

"In fact, we need broad-based efforts across governments, companies, universities, and more to deliver the widely distributed benefits of technological breakthroughs while minimizing the risks that arise," said Kent Walker, President of Global Affairs at Google & Alphabet, on Google's official blog.

To that end, Google released an official report containing a range of AI-related policy recommendations, urging governments to focus on three main areas: opening up opportunities, encouraging responsibility, and increasing security.

Opening up opportunities by maximizing AI's economic potential

According to Walker, AI will help a wide range of industries produce more sophisticated and valuable goods and services, and will help raise productivity even as demographic challenges mount.

To unlock the economic opportunities AI provides while minimizing labor disruption, policymakers need to invest in innovation and competition, develop a legal framework that supports responsible AI innovation, and prepare workers for AI-driven job transitions.

"For example, governments should support basic AI research through national labs and research institutions, and adopt policies that support responsible AI development (including privacy laws that protect personal information and allow trusted data to flow across national borders)," he explained.

Encouraging responsibility while reducing the risk of misuse

AI has helped the world face challenges ranging from disease to climate change. However, if not developed and deployed wisely, AI systems can also exacerbate existing social problems such as misinformation, discrimination, and the misuse of technological tools.

Overcoming these challenges will therefore require a multi-stakeholder approach to governance. Some challenges will demand fundamental research to better understand AI's benefits and risks and how to manage them, along with the development and deployment of new technical innovations in areas such as interpretability and watermarking.

"For example, leading companies could come together to form a Global Forum on AI (GFAI)," said Walker.

Increasing global security while preventing cybercriminals from exploiting the technology

AI has important implications for global security and stability. Generative AI, for instance, can be used to create misinformation, disinformation, and manipulated media.

The first step, therefore, is to establish technical and commercial guardrails that prevent AI from being used for criminal acts, and to enable collective efforts to address irresponsible actors while maximizing AI's potential benefits.

"For example, governments should explore next-generation trade control policies for specific applications of AI-powered software deemed to pose security risks, and for specific entities that support AI-related research and development in ways that could threaten global security," he concluded.
