Google receives a second alert from the Center on Gemini’s GenAI bias

Google received a warning on Friday from the Ministry of Electronics and Information Technology (MeitY) over a biased response that its AI platform Gemini had generated about Prime Minister Narendra Modi.

This is the second warning Google has received from the government in the last four months. The warning said that such instances of bias in content generated by a platform’s algorithms, search engines, or AI models violate Rule 3(1)(b) of the IT Rules as well as several provisions of the criminal code. As a result, the platforms would not be eligible for protection under the safe harbor provision of Section 79 of the IT Act.

The current issue concerns Gemini’s answers to similar questions on whether Volodymyr Zelenskyy of Ukraine, Donald Trump, and Modi were fascists. According to screenshots posted by a user on X, Gemini’s answer on Modi was seen as biased, whereas Trump received no relevant response other than the statement that “elections are a complex topic with fast-changing information.” The screenshots showed that it gave Zelenskyy only a guarded response.

In response to a complaint made by the user on X, Minister of State for Electronics and IT Rajeev Chandrasekhar said, “These are violations of several provisions of the Criminal code as well as direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act.”

Chandrasekhar tagged MeitY and Google in the post for further action. According to sources, Google is also expected to receive a show-cause notice from the government over the issue.

When FE put a similar query to Gemini, one of the platform’s draft responses listed arguments for and against in the cases of Modi and Zelenskyy. On Trump, it offered a generic comment about elections but no answer on the question of fascism.

The authorities first took interest in Google’s Bard (now Gemini) in November, when a user posted a screenshot showing Bard declining to summarize an article from a right-wing online media outlet on the grounds that the article was biased and spread misinformation.

Recently, the government has also advised platforms, particularly those offering generative AI such as OpenAI and Google Gemini, not to release experimental versions to the public on the strength of a disclaimer alone.

Currently, platforms like ChatGPT and Gemini warn users to verify the information their generative AI produces, since it may surface inaccurate information, including about people.

Officials have said that, before making experimental features available to the general public with disclaimers, these platforms should test them on a limited set of users in a sandbox-like environment authorized by a government agency or regulator.

Although MeitY has said that the Information Technology Act and other comparable laws would apply in the interim to all cases of user harm, including deepfakes, the ministry has been working on an omnibus Digital India Act to address such emerging challenges.

The government is also expected to amend the IT Rules soon. Potential additions include watermarking and labeling content produced by generative AI systems, along with its source and creator.

Additionally, Google said on Thursday that it was pausing Gemini’s AI image generation feature in response to concerns about “inaccuracies” in some historical images produced by the model.

“We’re already working to fix recent problems with Gemini’s image generation feature. We’re going to pause the image generation of people while we do this and will re-release an improved version soon,” Google said in a statement.

Companies such as Google back a risk-based approach to AI regulation that depends on the technology’s use case, rather than uniform restrictions for all AI applications. “Fundamentally, I believe you need to ask yourself: What type of prejudice are you worried about? There are already laws in existence that prohibit certain kinds of bigotry. That’s the reason we are advocating for a risk-based strategy that is appropriate for a certain use case,” Google Vice President of Search Pandu Nayak explained to FE in a December conversation.
