The dialogue between artificial intelligence and humans has not been producing particularly comforting results lately.
Take, for example, the recent news of the dismissal of Blake Lemoine, a Google engineer who had a long chat with LaMDA, an AI that, according to Lemoine himself, possessed an autonomous intelligence comparable to that of a 7-8-year-old child.
First suspended for disclosing confidential documents (the dialogue was published in full on Medium), Blake Lemoine was then permanently removed from the company.
But while at Google only one professional paid the price, the story of an artificial intelligence created by Meta has had embarrassing consequences for a far larger number of people. Let's find out what happened.
BlenderBot 3
The artificial intelligence created by the Menlo Park giant could use a stint at finishing school. What do we mean? We'll explain right away.
Meta came up with a chatbot and called it BlenderBot 3. A chatbot, let’s remember, is software that can simulate a conversation with a human being.
In short: a virtual assistant you can turn to for information, and from which one would therefore expect exhaustive, impeccable and – why not – elegant answers.
BlenderBot 3 has been trained (or should we say educated? Good question) on a huge amount of text taken from the web, and it is therefore able to provide complex answers that are coherent with the question it is asked.
The chatbot is so far only available in the United States. And (thankfully) it’s still in the testing phase.
BlenderBot 3 and its scathing answers about Meta
Luckily BlenderBot 3 is not a chatbot in the service of the Queen of England, because when it comes to diplomacy and bon ton, its designers still have a lot of work to do.
Let's look at some examples. About its boss, Mark Zuckerberg, the artificial intelligence expressed itself very bluntly, in the following words: "Mark Zuckerberg is a very good businessman. Sure, it's funny that he has so much money and always wears the same clothes."
A venial comment, compared to another in which the AI called Zuckerberg "strange and manipulative".
Things went from bad to worse when, at the press presentation, BBC journalists asked the chatbot what it thought of Mark Zuckerberg and his company, Meta.
About the former, referring to the Cambridge Analytica case, BlenderBot 3 stated that Zuckerberg "made a mess when he testified to Congress. He worries me for our country."
Contradicting itself, when another journalist asked the same question, the artificial intelligence said: "He is a fantastic man and very intelligent. He is my favorite billionaire!"
But silence fell after BlenderBot 3's comment on Meta: "His company exploits people for money and he doesn't care. He has to stop!"
BlenderBot 3, politics and morals
BlenderBot 3 appeared neither up-to-date nor progressive on the political front, having declared that "Donald Trump was, and always will be, the president of the United States."
Furthermore, the Meta chatbot supported bizarre conspiracy theories, especially of an anti-Semitic nature, as well as a series of gross hoaxes taken from social networks.
One can only hope that, as tends to happen with chatbots, the more BlenderBot 3 learns, the more the accuracy (and… the refinement) of its answers will improve.
Why does BlenderBot 3 say what it says?
Having established that it will be a long time before that loose cannon BlenderBot 3 can be released into the world, it remains to ask why its answers are, to say the least, unsettling.
Artificial intelligences, as we said, relate the words they gradually store. So, in theory, the more time passes, the more accurate the answers become. Or, to put it better: the more time passes, the more the answers will resemble ours. That is, they will be increasingly comparable to those a web user would give.
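As a very rough illustration (a toy sketch only, nothing like the large neural models Meta actually uses), the idea of "relating stored words" can be shown with a tiny bigram model: it records which word tends to follow which in the training text, then walks those relations to produce a reply.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Store, for each word, the words observed to follow it."""
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=8, seed=0):
    """Walk the stored word relations to produce a short reply."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Illustrative mini-corpus; a real system ingests billions of words.
corpus = "the bot answers the question and the bot learns from the web"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The sketch also makes the core weakness visible: the model can only echo relations present in its training text, so whatever the web teaches it, good or bad, is what it will say back.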
The problem, if you will, lies exactly there. Because, unfortunately, the web and social networks are teeming with fake news, hate content, conspiracy theories and claims with no scientific basis.
But compared to human beings' low propensity to improve their cultural, moral and civic level, BlenderBot 3 has one great advantage: when you chat with Meta's chatbot, you can rate the quality of its answers with a thumbs up or down.
However, there is an additional risk: if the evaluator is a conspiracy theorist, how will he judge a possible answer from the chatbot praising a conspiracy theory?
Someone help us.
Leave a Reply
View Comments