Google has swiftly addressed concerns surrounding its latest AI-powered image-generation tool, Google Gemini, following allegations that it over-corrects to mitigate potential racism. Users have reported instances where the Gemini bot, developed by the tech giant, produced images depicting a diverse array of genders and ethnicities regardless of historical context. Notably, prompts requesting images of the founding fathers of the U.S. returned results featuring women and individuals of diverse racial backgrounds.
Acknowledging the feedback, Google admitted that its tool has been “missing the mark” regarding accuracy and historical context. This admission comes amidst growing scrutiny over the responsibility of technology companies to ensure their AI algorithms are free from biases and inaccuracies, particularly concerning sensitive topics like race and gender representation.
The incident underscores the challenges inherent in developing AI technologies that accurately reflect historical contexts and societal norms without perpetuating stereotypes or inaccuracies. Google’s response highlights the company’s commitment to addressing feedback and refining its AI tools to better align with user expectations and ethical considerations.
As Google races to address the concerns raised by users and experts alike, the incident serves as a reminder of the importance of continuous evaluation and improvement in the development of AI technologies to ensure they promote inclusivity, accuracy, and cultural sensitivity.
What is Google Gemini AI?
Google Gemini AI, formerly known as Bard, represents Google’s latest advancement in artificial intelligence technology. The rebranded chatbot has officially launched for global users, enabling interactions with the Gemini Pro 1.0 model across more than 230 countries and territories and supporting over 40 languages. Gemini Advanced, a feature of the Google One AI Premium Plan, is available at a monthly cost of $19.99, inclusive of a complimentary two-month trial period.
Subscribers to the AI Premium Plan can anticipate Gemini’s seamless integration into various Google applications such as Gmail, Docs, Slides, Sheets, and more; this integration was formerly referred to as Duet AI. Gemini showcases Google’s commitment to enhancing user experiences through innovative AI-driven solutions. With its widespread language support and global accessibility, Gemini aims to revolutionize how users engage with AI technology, offering enhanced capabilities and streamlined interactions across diverse linguistic and geographical landscapes.
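For developers, the same model family is also reachable programmatically. The snippet below is a minimal sketch of querying a Gemini Pro model through Google’s generative AI Python SDK; the API key is a placeholder, and the exact model name and SDK surface may vary by version.

```python
# Minimal sketch: querying a Gemini Pro model via Google's
# generative AI Python SDK (pip install google-generativeai).
# The API key and prompt below are placeholders, not real values.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Summarize the history of the telegraph.")
print(response.text)
```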
The Controversies Around Gemini AI
Within a week of its launch, Google’s new AI image generator, part of Gemini AI, became embroiled in controversy. On February 23, Google issued an apology for the flawed rollout of the tool, admitting that it sometimes “overcompensated” for diversity, even when it was inappropriate, according to an AP report.
To address concerns, Google announced a temporary suspension of the chatbot’s image generation. Prabhakar Raghavan, a senior vice president overseeing Google’s search engine and other businesses, expressed apologies for inaccurate and offensive images in a blog post. He stated, “It’s clear that this feature missed the mark,” and thanked users for their feedback.
While specific examples were not provided, social media highlighted instances where the Gemini AI image generator depicted a Black woman as one of the United States’ founding fathers and portrayed Black and Asian individuals as Nazi-era German soldiers, as reported by AP.
“Missing the Mark”
“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here,” Jack Krawczyk, senior director for Gemini Experiences, said on Wednesday.
“We’re working to improve these kinds of depictions immediately,” he added.
How Google Gemini’s Biased Approach Causes Harm
Misrepresentation of Truth
Gemini’s tendency to inaccurately depict historical figures and events based on race can lead to misrepresentation and distortion of facts, which can be offensive and disrespectful to individuals and communities.
Undermining Diversity
By overcompensating for diversity without regard for historical accuracy, Gemini undermines the importance of genuine representation and diversity in society, perpetuating stereotypes and erasing the contributions of marginalized groups.
Offensive Imagery
The generation of offensive images, such as depicting individuals of certain races in inappropriate contexts, can evoke feelings of anger, hurt, and frustration among those affected, contributing to a sense of exclusion and discrimination.
Loss of Trust
Users may lose trust in Google’s AI technologies and its commitment to inclusivity and diversity if Gemini continues to produce racially insensitive content. This loss of trust can damage Google’s reputation and credibility in the eyes of its users and the broader community.
Gemini’s biased approach disappoints people by perpetuating harmful stereotypes, erasing genuine diversity, creating offensive imagery, and undermining trust in Google’s commitment to inclusive technology. Google must address these shortcomings to avoid similar mishaps in the future.
Statement by Pichai
In response to the recent controversy surrounding Gemini, Sundar Pichai (CEO of Google and Alphabet, Google’s parent company) sent a memo to Google employees addressing the issues with the AI’s responses. He acknowledged that some of Gemini’s responses were “unacceptable” and admitted that Google “got it wrong.” Pichai emphasized that offending users and displaying bias is completely unacceptable.
Google teams have been working tirelessly to address these issues, and they are already seeing significant improvements across various prompts. Pichai recognized that no AI is perfect, especially at this early stage of the industry’s development. Still, Google is committed to meeting high standards and will continue to work on improvements.
Pichai reiterated Google’s mission of organizing the world’s information and making it universally accessible while providing users with helpful, accurate, and unbiased information. In response to the Gemini text-to-image generation debacle, Pichai outlined a clear set of actions, including structural changes, updated product guidelines, improved launch processes, red teaming, and technical recommendations. These measures aim to prevent similar issues in the future and uphold Google’s commitment to providing users with reliable and unbiased information.
What’s Next?
This bias in Google Gemini can manifest in stereotypical portrayals of gender roles or associations of certain professions with specific genders. The steps for improvement include diversifying training data, implementing bias mitigation techniques, adding human oversight, and increasing transparency. Gemini is now being trained to include a wider range of gender representations and break down existing stereotypes. Google can employ human evaluators to identify and flag biased outputs, providing feedback for model improvement, as illustrated in the sketch below. Above all, Google must be more transparent about Gemini’s development and potential biases as it rectifies these issues.
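Google’s internal evaluation tooling is not public, so the following is purely an illustrative sketch of the human-oversight step described above: a toy check that routes historically grounded prompts straight to a reviewer and flags heavily skewed outputs for open-ended ones. The `flag_for_human_review` helper and its attribute labels are hypothetical, not part of any real Google API.

```python
# Illustrative sketch only: a toy fairness check of the kind an
# evaluation pipeline might run on image-generation outputs.
# The attribute labels are assumed to come from a separate
# classifier; Google's actual tooling is not public.
from collections import Counter

def flag_for_human_review(prompt: str, attribute_labels: list[str],
                          historical: bool,
                          skew_threshold: float = 0.8) -> bool:
    """Return True if the labeled outputs for `prompt` warrant review.

    - Historically grounded prompts are always routed to a reviewer,
      who checks the outputs against the historical record.
    - Open-ended prompts are flagged when one attribute dominates.
    """
    if historical:
        return True
    counts = Counter(attribute_labels)
    most_common_share = counts.most_common(1)[0][1] / len(attribute_labels)
    return most_common_share >= skew_threshold

# Example: ten generations for one prompt, labeled by a classifier.
labels = ["woman"] * 9 + ["man"]
print(flag_for_human_review("a CEO at a desk", labels, historical=False))  # True
```

A real pipeline would replace the single skew threshold with per-prompt expectations, but the design point stands: automated checks narrow the stream of outputs, and human evaluators make the final call on what is biased or anachronistic.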