From AI Angels To Data Demons – Did Google’s Gemini Cross The Line?

Google’s generative AI is generating concerning content.

Welcome to the new world: the intricate dance between innovation and responsibility.
Google’s re-introduction of Gemini (formerly known as Bard, and their response to OpenAI’s ChatGPT) made headlines this week when its ability to generate images met with immediate controversy.
These are the pitfalls that will always accompany AI advancements.

Here’s what happened…

Gemini generated racially diverse but historically inaccurate images of historical figures, including people of color dressed as Nazis, sparking widespread offense and forcing Google to quickly shut the feature down.
Google’s CEO, Sundar Pichai, swiftly acknowledged the mishap, emphasizing a commitment to rectify the inaccuracies and biases — a move that speaks volumes about the iterative nature of AI development and the critical need for real-world testing.

Welcome to the double-edged sword of AI innovation.

Google’s rapid deployment of generative AI capabilities within Gemini serves as a potent reminder of the risks inherent in pushing technological boundaries without fully considering societal impacts.
So the discussion is less about how this happened and much more about the cultural and ethical considerations in AI training data and algorithm design.
Sure, we know the importance of diversity and inclusivity within AI development teams to mitigate inherent biases.
But those teams and algorithms may only be as strong as the data they’re fed.

Is AI simply a mirror, reflecting back the better (or worse) angels of our nature?

The training data clearly reflects our societal prejudices, which is (to me) a much bigger problem than simply pointing fingers at the developers.
Technology alone cannot solve these problems.

So…

Does this impact brand and consumer trust?
What seems like a public relations rollercoaster (one that sparked headlines like “Google’s Gemini Headaches Spur $90 Billion Selloff”) shows us the fragile nature of trust in tech giants and the potential financial consequences of AI deployment missteps.
Still, Google responded quickly and the stock price is adjusting.
This highlights the potential for recovery through transparent and responsible actions.

What’s next?

We’re going to be hearing these terms and phrases a lot in the coming weeks, months and years:

  • Ethical AI development. 
  • Continuous AI algorithm monitoring.
  • Agility in AI development and communications.
  • Ability to address unforeseen consequences of AI usage.

There will be more (much more).

This is what Elias Makos and I discussed on CJAD 800 AM. Listen in right here.

Before you go… ThinkersOne is a new way for organizations to buy bite-sized and personalized thought leadership video content (live and recorded) from the best Thinkers in the world. If you’re looking to add excitement and big smarts to your meetings, corporate events, company off-sites, “lunch & learns” and beyond, check it out.