Google has announced that its AI chatbot, Gemini, will be restricted from answering queries related to the global elections taking place this year. The decision comes amid growing concern over the potential spread of misinformation and fake news, particularly given recent advances in generative AI technology.
Under the restriction, when asked about elections, such as the upcoming U.S. presidential contest between Joe Biden and Donald Trump, the chatbot will respond with, "I'm still learning how to answer this question. In the meantime, try Google Search."
A spokesperson for Google stated, "In preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we are restricting the types of election-related queries for which Gemini will return responses."
The move follows similar actions taken by governments worldwide to regulate AI technologies, particularly those with the potential to spread misinformation. India, for example, has mandated that tech firms seek government approval before releasing AI tools deemed "unreliable" or still under trial, and requires such tools to be labeled accordingly.
Google's AI products have come under scrutiny recently after Gemini generated historically inaccurate depictions of people, prompting the temporary suspension of the chatbot's image-generation feature. CEO Sundar Pichai acknowledged the problems, describing Gemini's responses as "biased" and "completely unacceptable."
Similarly, Meta Platforms, the parent company of Facebook, has announced plans to establish a team dedicated to combating disinformation and abuse of generative AI in the lead-up to European Parliament elections in June.
As concerns over the misuse of AI technology continue to mount, tech companies face increasing pressure to curb the spread of misinformation and deploy their AI products responsibly. The restrictions placed on Gemini underscore the challenge of balancing innovation against the need to guard against potential harms in an increasingly digital and interconnected world.