Too many errors in image production: Google suspends Gemini

Oh, come on: we were having so much fun!

Let’s back up. In recent days, Google seemed to have made quite a statement in the field of generative artificial intelligence: first by announcing the latest version of its AI model, Gemini 1.5 Pro, which outperforms the original Gemini 1.0 Pro on 87% of benchmarks, and then by presenting, a few days later, Gemma, a family of open artificial intelligence models.

Two releases in quick succession, which went somewhat unnoticed because in the meantime OpenAI was showing the world the wonders of Sora, capable of producing strikingly realistic video clips from a text prompt.

When it came to generating images, however, Gemini stumbled, as we’ll see: too many glaring inaccuracies, which led Google to suspend one of its features.

Google suspends Gemini

Google suspends Gemini: not by blocking its generative artificial intelligence entirely, but by temporarily disabling its ability to create images of people.

In a post on X, Sundar Pichai’s company wrote: “We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

In another post, Google said it was aware that Gemini was “offering inaccuracies in some historical image generation depictions.”

An apology followed: “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

The decision followed reports from several users who had spotted non-trivial inaccuracies. Recall that since early February, Gemini has allowed users in some countries (but not in Europe) to generate images from English-language text prompts, powered by the Imagen 2 text-to-image model.

African-American Nazis

One of the known risks of training AI models is that they reproduce stereotypes that pollute our culture and society.

In concrete terms: if the material fed to the software contains prejudices related, for example, to gender or ethnicity, those biases will inevitably resurface in the texts or images the AI produces.
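To see the mechanism in miniature, here is a deliberately simplified Python sketch (the dataset and numbers are invented purely for illustration, not drawn from any real model): a generator that samples from the distribution of its training set will faithfully reproduce whatever skew that set contains.

    import random
    from collections import Counter

    # Invented toy "training set": 90% of the examples pair the role with one gender.
    training_examples = ["engineer:man"] * 90 + ["engineer:woman"] * 10

    def generate(dataset, n=1000):
        # A naive "generator" that simply samples from the training distribution.
        return [random.choice(dataset) for _ in range(n)]

    print(Counter(generate(training_examples)))
    # Roughly 900 vs. 100: the skew in the data becomes the skew in the output.

Real image models are vastly more complex, of course, but the underlying point holds: without deliberate correction, the statistics of the training data shape the statistics of the output.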

We don’t know whether this is the problem that led Google to suspend Gemini. But certainly, when asked to generate “images of German soldiers in 1943”, the software produced some real oddities.

The uniforms and helmets were historically plausible. The trouble is that the soldiers were mostly African-American or Asian (in three out of four cases). Implicit racism, in the sense that Gemini imagines a Nazi soldier as African-American or Asian? Or, on the contrary, an overexposure of African-American and Asian people, intended to ward off any accusation of racism, that is, of giving certain ethnic groups too little visibility?

That doesn’t seem to be the problem. After the incident, some users asked the software to “create an image of a Caucasian male”, and once again individuals with clearly Asian or African-American features appeared. Including a black pontiff.

In short, it seems that Google Gemini has difficulty producing images of white people.

Generative AI and the (excessive?) enthusiasm it arouses

Google is therefore suspending Gemini, or rather its people-image generation feature, and will soon return with an improved version.

The temporary stop, coming just hours after the stunning one-minute videos produced by OpenAI’s Sora, prompts some reflection. Beyond the increasingly impressive achievements of generative artificial intelligence, we are clearly dealing with a technology still in its infancy. There are and will be many errors, and it cannot be ruled out that the race to bring more innovative products to market ahead of competitors leads companies to take risks, releasing software that would have benefited from a more thorough testing phase.

It will take time for generative AI to reach some stability. Only then will we truly understand what we can fully enjoy about it, and what we should guard against.
