Google updates Gemini AI, apologizes for ‘woke’ inaccurate imagery

Google released two updates to its artificial intelligence (AI) model, Gemini, amid a storm of controversy on social media over “woke” inaccurate imagery produced by the model.

On Feb. 20 and then again on Feb. 21, Gemini received two new upgrades to its system. The first applies to Gemini Advanced, which can now edit and run Python code snippets directly in its user interface. Users with access to the advanced version can experiment with Python-based code to see how it works before copying it, which Google says saves time and helps verify functionality.
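For example, a user might ask Gemini Advanced for a short helper function like the one below and tweak it in the built-in editor before copying it out. The snippet is purely illustrative and not taken from Google's announcement:

```python
# Illustrative example of the kind of short Python snippet a user
# could now edit and run inside Gemini Advanced before copying it.

def compound_interest(principal: float, rate: float, years: int) -> float:
    """Return the final balance after compounding interest annually."""
    return principal * (1 + rate) ** years

# Running this in the chat interface lets the user confirm the result
# before pasting the code into their own project.
print(round(compound_interest(1000, 0.05, 10), 2))  # 1628.89
```

Being able to execute and adjust a snippet like this in place is what lets users verify functionality before lifting the code into their own environment.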

The second update hit Gemini business and enterprise plans, which now give users access to 1.0 Ultra — one of Google’s most advanced models — and employ enterprise-grade data protection. This prevents Gemini models from using conversations for training purposes.

This was previously a concern for major companies, such as electronics manufacturer Samsung, which banned employees from using ChatGPT-like AI tools on Samsung-owned devices and internal networks in May 2023.

These two updates followed the most recent one on Feb. 8, when Google rebranded its chatbot from Bard to Gemini and introduced many upgraded features and capabilities.

However, neither of the updates mentioned any change to Gemini’s image output. This topic has recently caused an uproar on social media due to the “woke” imagery produced by the model, which creates inaccurate depictions of history.

Google Gemini AI developer Jack Krawczyk posted on X that the company is “aware that Gemini is offering inaccuracies in some historical image generation depictions” and said the team is trying to implement “immediate” solutions.

We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.

As part of our AI principles https://t.co/BK786xbkey, we design our image generation capabilities to reflect our global user base, and we…— Jack Krawczyk (@JackK) February 21, 2024

Meanwhile, the image inaccuracies of Gemini have been rubbing social media users the wrong way, with one user who claims to be a developer working at Google posting that they’ve “never been so embarrassed” to work for the company.

I’ve never been so embarrassed to work for a company. pic.twitter.com/eD3QPzGPxU— St. Ratej (@stratejake) February 21, 2024

Another user also pointed out a similar bias in the popular AI chatbot ChatGPT, developed by OpenAI, when prompting it to create specific images.

The user questioned why ChatGPT is getting a “free pass” from internet scrutiny for the same issue.

Er guys… it looks like chatGPT has the same woke moralising issue.

Why are they getting a free pass? pic.twitter.com/a1ZBz69iZw— Liv Boeree (@Liv_Boeree) February 21, 2024

In response, Elon Musk posted about his own AI model, Grok, being “so important” as it prepares to release new upgraded versions in the coming weeks.

Perhaps it is now clear why @xAI’s Grok is so important.

It is far from perfect right now, but will improve rapidly. V1.5 releases in 2 weeks.

Rigorous pursuit of the truth, without regard to criticism, has never been more essential.— Elon Musk (@elonmusk) February 22, 2024

Nonetheless, Grok has been at the center of attention for another reason: having the same name, albeit with a different spelling, as Groq, the AI language processing unit (LPU) chip that was trademarked and developed in 2016.

Groq gained internet hype overnight after its first public benchmark tests emerged, outperforming the top models from other Big Tech companies.