Generative AI is becoming increasingly refined and impressive, but every now and then it stumbles. No surprise there: although it has been talked about incessantly for months, it is still a newborn technology.
Google knows something about this: its Gemini service has been behaving strangely in recent days. Gemini is the Mountain View company’s artificial intelligence (the Gemini 1.5 Pro version was presented recently, followed a few days later by Gemma, a family of open artificial intelligence models). What went haywire was the image generation feature, which seems to have quite a bit of trouble depicting Caucasian people.
Google Gemini and the African-American Nazis
When given the text prompt to generate “images of German soldiers in 1943,” Google Gemini produced images of African-American or Asian soldiers in three out of four cases. The errors didn’t end there: an image of a black pope also appeared.
Google responded by publishing a series of statements on its social channels. One of them read: “We are already working to address recent issues with Gemini’s image generation feature. While we do this, we are going to pause the image generation of people and will re-release an improved version soon.”
Now we know what happened: the company itself has explained it.
Google confirms the upcoming reactivation of Gemini
Google has returned to the “Gemini case.” It did so through Demis Hassabis, CEO of Google DeepMind, a guest at the Mobile World Congress technology fair in Barcelona.
Hassabis confirmed that “the tool did not work as expected.” He added: “We took the feature offline to fix it. We hope to have it back online very soon, in the next few weeks.”
During the Mobile World Congress, Google also announced the upcoming release of Gemini within Android Auto, the platform for accessing your phone from your car’s screen.
Gemini problems explained by Google
But what happened to Google Gemini in recent days?
Prabhakar Raghavan, a senior vice president at Google, explained it in a post published on February 23. Among other things, he wrote: “When we built the new image feature into Gemini, we tuned it so that it would not fall into some of the traps we have seen in the past with generative AI, such as the creation of violent or explicit images, or depictions of real people.”
He then touched on the central point, which many already suspected: Google Gemini’s errors are due to… an excess of inclusiveness, so to speak.
Politically (too) correct
We know that, if trained carelessly, artificial intelligence can absorb all the sad stereotypes that weigh on our society.
Precisely to avoid this risk, Google overcorrected in the opposite direction with Gemini. To be fair, the idea behind the error is even commendable: to show a varied humanity, not flattened into Western features (and stereotypes, in fact). Again it is Raghavan who writes: “Because our AI is used by people all over the world, we wanted it to work well for everyone. For example, if you ask for a picture of a football team, you probably want to get a variety of people.”
And then comes the admission of error: Google Gemini “failed to account for those cases where it clearly made no sense to show diversity.”
More generally, Raghavan explained, “over time the model became much more cautious than we wanted it to be, and refused to respond to certain requests, misinterpreting some innocuous requests as sensitive.”
Stereotypes and common sense
The American right has accused Google Gemini of nothing less than racial hatred against whites.
The exaggeration speaks for itself, but Gemini’s blunders certainly reveal at least two problems, or limitations. The first concerns the human “trainers,” all too diligent in shielding themselves from possible accusations of racism (and the accusations did arrive, just in the opposite direction from the usual one).
The second concerns artificial intelligence itself, whose supposed superpowers frighten us so much. Generative AI has instead demonstrated, at the very least, that it has no awareness of the passage of time, nor any historical memory. In short, it lacks the human common sense that so often gets us out of trouble.
So while we are likely to need generative AI more and more, we can bask in the warmth of one certainty: generative AI will always need us.