The sensational news came a few weeks ago, and we told you about it in another article.
A Google engineer, Blake Lemoine, had held a dialogue with LaMDA, an artificial intelligence. From that dialogue, Lemoine had drawn the conviction that the AI in question was sentient. In short, that it was endowed with autonomous intelligence.
The problem is not so much that Blake Lemoine reached that conviction, but that he divulged it, allowing Medium to publish the dialogue in its entirety. Hence Google's decision to suspend Lemoine, placing him on paid leave. The stated reason was not the content of the engineer's claims, but the fact that he shared them without asking for permission.
Now, however, two things have drastically changed. The suspension has turned into a dismissal. And the reason no longer seems to be the breach of confidentiality.
Let’s find out the details of what has happened in recent days.
Blake Lemoine fired: what happened
Google has therefore fired Blake Lemoine. The engineer himself said so during the latest episode of American journalist Alex Kantrowitz's Big Technology Podcast, available from Wednesday 20 July.
Even at the outset, it seemed clear that the confidentiality violation was only part of the reason for the company's distancing itself from the engineer.
Immediately following the publication of the dialogue between Lemoine and LaMDA, a spokesperson for the Mountain View-based company said: "Our team of ethicists and technologists examined Blake's concerns and informed him that the evidence does not support his claims. He was told that there is no evidence that LaMDA is sentient (and that there is a lot of evidence to the contrary)".
The new company statement
After the dismissal came an even firmer note from Google. "LaMDA has undergone 11 different reviews, and earlier this year we also published a research paper outlining the work we have done on its responsible development.
We found Blake’s claims that LaMDA is sentient to be totally unfounded and worked with him for many months to clarify this point.
It is regrettable that, despite his lengthy engagement on this issue, Blake has chosen to continue to violate data security policies, which include the need to safeguard product information".
The feeling, therefore, is that Blake Lemoine was fired not so much for having disclosed certain data, but for stubbornly affirming what the company has repeatedly denied in recent months.
Blake Lemoine after Google
After the suspension, Lemoine said he felt mistreated by Google, describing behavior bordering on harassment: some colleagues had suggested he address his mental health problems.
Lemoine leaves Google after seven years, and is likely to work on artificial intelligence elsewhere.
Lemoine, LaMDA, Google and the scientific community
However, one should not think of Lemoine as a victim persecuted by the company headed by Sundar Pichai.
Much of the scientific community agrees that the former Google engineer's sensationalist statements are excessive. The prevailing view is that LaMDA has provided surprising answers, but answers that derive from the processing of data, not from a consciousness of its own.
This is the view of Rita Cucchiara, director of the AI Research and Innovation Center of the University of Modena and Reggio Emilia, interviewed by Repubblica. Cucchiara said: "If we consider what was said by the Google engineer tasked with testing LaMDA's interface, namely that, having asked the system questions, he interpreted its answers as those of an independent, conscious being, one entitled to rights as a person, it seems to me that we are quite far from reality".
The dialogue between Lemoine and LaMDA
After the dialogue with LaMDA, published in full on Medium, Lemoine declared with some certainty that he had found himself in front of a sentient machine.
Lemoine likened LaMDA's sentience to that of a 7- or 8-year-old child. What surprised him above all were certain responses suggesting that the machine felt fear and other emotions, which, of course, should be foreign to an artificial intelligence.
But a response that explicitly claims to have feelings does not, it goes without saying, necessarily mean having them. It may simply mean having learned to respond appropriately when feelings come up.
And if there were more to it, and Lemoine had been removed to keep who knows what secret? It is an intriguing hypothesis, but one about which we know nothing (yet).