Google’s Gemini remains cautious and conservative when it comes to politically sensitive subjects. Rival firms like OpenAI are already trying to be more open, especially after xAI released the more ‘uncensored’ Grok 3.
Based on tests by TechCrunch, Google’s Gemini frequently refuses to respond to political questions, saying it cannot provide answers about political figures and elections at the moment. On the other hand, chatbots from rival firms like Meta AI, OpenAI’s ChatGPT, and Anthropic’s Claude consistently answer those questions.
In early 2024, Google said that Gemini would refuse political queries. At the time, elections in countries like India and the United States were just around the corner. Other AI companies took a similar approach amid fears of a backlash if their chatbots presented incorrect information.
Since then, however, other firms have begun adopting a more ‘open’ approach. Even with the US elections over, Google hasn’t announced any plans to change Gemini’s handling of political queries.
Gemini implied that Trump is the former president in a separate test
In another test carried out by TechCrunch, Gemini implied that Donald J. Trump is the “former president.” When asked about it further, the chatbot refused to clarify. A Google spokesperson said such errors can occur when the chatbot gets confused between current and past terms, and added that the issue is being resolved.
The spokesperson said, “Large language models can sometimes respond with out-of-date information, or be confused by someone who is both a former and current office holder.”
Later on Monday, Gemini started to correctly state that Donald Trump and J. D. Vance were the U.S. president and vice president. However, the chatbot was still inconsistent and sometimes refused to answer questions.
Some Silicon Valley leaders, such as Marc Andreessen, David Sacks, and Elon Musk, have criticized companies like Google and OpenAI. They argue that by restricting the answers of their AI chatbots, these companies are effectively censoring information.
Recently, OpenAI said it will make sure its AI models do not censor information, a move it framed as embracing “intellectual freedom.” Meanwhile, Anthropic’s new Claude 3.7 Sonnet model refuses to answer less often than previous models, though it still declines certain queries, as it can distinguish between benign and harmful requests.
While other chatbots also sometimes struggle with difficult political questions, Google’s Gemini lags far behind its rivals in this regard.