Government Of India Issues Warning To Technology Companies Over AI Development

JAKARTA - The Indian government has issued a warning to technology companies developing new artificial intelligence (AI) tools that their products must be approved by the government before being released to the public.

According to the advisory released by India's IT Ministry on March 1, this approval must be obtained before the public release of any AI tool that is "unreliable" or still in the testing phase, and such tools must be labeled to warn that they may provide inaccurate answers to questions.

"The availability for users on the Indian Internet must be carried out with an explicit permit from the Indian Government," said India's IT Ministry.

In addition, the advisory asks platforms to ensure that their tools do not "threaten the integrity of the election process," with India's elections expected to take place this summer.

The new warning comes shortly after one of India's top ministers criticized Google and its AI tool, Gemini, for "inaccurate" or biased responses, including one suggesting that Indian Prime Minister Narendra Modi had been described by some as a "fascist."

Google has apologized for Gemini's shortcomings and said the tool "may not always be reliable," particularly on current social topics.

"Security and trust are the platform's legal obligations. 'Sorry, it's not reliable' not to rule out the law," said Rajeev Chandrasekhar, Deputy Minister of IT India, in a tweet on Platform X.

In November, the Indian government said it would introduce new regulations to help address the spread of AI-generated deepfakes ahead of the upcoming elections, a move regulators in the United States have also pursued.

However, Indian officials are facing objections from the tech community over the latest AI advisory, with critics arguing that India is a leader in the technology space and that it would be a "crime" if the country "regulated itself out of this leadership."

Chandrasekhar responded to this "noise and confusion" in a follow-up post on X, saying there should be "legal consequences" for platforms that "allow or directly produce content that violates the law."

"India believes in AI and is fully involved not only for talent but also as part of expanding our Digital & Innovation ecosystem. India's ambition in AI and ensuring internet users get a safe and reliable internet is not a binary," Chandrasekhar said.

He also explained that the advisory was only intended to guide those deploying untested or still-in-testing AI platforms onto the public internet, so that they understand their obligations and the consequences under Indian law, as well as the best way to protect themselves and their users.