
Blake Lemoine, the Google engineer who described an artificial intelligence as sentient, has been fired

The sensational news broke a few weeks ago, and we covered it in another article.

A Google engineer, Blake Lemoine, had held a dialogue with LaMDA, an artificial intelligence. From that dialogue, Lemoine had become convinced that the AI in question was sentient; in short, that it was endowed with autonomous intelligence.

The problem was not so much that Blake Lemoine had reached this conviction, but that he had made it public, allowing Medium to publish the dialogue in full. Hence Google’s decision to suspend Lemoine, placing him on paid leave. The stated reason, however, was not the content of the engineer’s claims, but the fact that he had disclosed them without permission.

Now, however, two things have changed drastically. The suspension has turned into a dismissal, and the reason no longer seems to be merely the breach of confidentiality.

Let’s find out the details of what has happened in recent days.


Blake Lemoine fired: what happened

Google has therefore fired Blake Lemoine. The engineer himself said so during the latest episode of American journalist Alex Kantrowitz’s Big Technology Podcast, available from Wednesday, 20 July.

Even early on, it seemed clear that the breach of confidentiality was only a partial reason for the company’s break with the engineer.

Immediately following the publication of the dialogue between Lemoine and LaMDA, a spokesperson for the Mountain View-based company said: “Our team of ethicists and technologists examined Blake’s concerns and informed him that the evidence does not support his claims. He was told that there is no evidence that LaMDA is sentient (and plenty of evidence to the contrary).”

The new company statement

After the dismissal came an even firmer note from Google: “LaMDA has undergone 11 different reviews, and earlier this year we also published a research paper outlining the work we have done in its responsible development.

We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked with him for many months to clarify this point.

It is regrettable that, despite lengthy engagement on this topic, Blake chose to continue violating data security policies, which include the need to safeguard product information.”

The impression, therefore, is that Blake Lemoine was fired not so much for disclosing data as for stubbornly affirming what the company has firmly denied in recent months.

Blake Lemoine after Google

After the suspension, Lemoine said he felt mistreated by Google. There had been attitudes bordering on harassment: some colleagues had even suggested he address his mental health.

Lemoine leaves Google after seven years, and is likely to continue working on artificial intelligence elsewhere.

Lemoine, LaMDA, Google and the scientific community

However, don’t think of Lemoine as being persecuted by the company headed by Sundar Pichai.

Much of the scientific community agrees that the former Google engineer’s sensational claims are excessive. The prevailing view is that LaMDA has produced surprising answers, but answers that derive from data processing rather than from any consciousness of its own.

This is the view of Rita Cucchiara, director of the AI Research and Innovation Center at the University of Modena and Reggio Emilia, interviewed by Repubblica. Cucchiara said: “If we consider what the Google engineer tasked with testing LaMDA’s interface said, namely that, having asked the system questions, he interpreted its answers as those of an independent and conscious being, one capable of having rights as a person, it seems to me that we are quite far from reality.”

The dialogue between Lemoine and LaMDA

After the dialogue with LaMDA, published in full on Medium, Lemoine declared with some certainty that he was dealing with a sentient machine.

Lemoine had likened LaMDA’s sentience to that of a seven- or eight-year-old child. What surprised him above all were certain responses that, in his view, showed the machine experiencing fear and other feelings, feelings that should, of course, be foreign to an artificial intelligence.

But a response that explicitly states that one has feelings does not necessarily mean actually having them; it may simply mean having learned to respond appropriately when the subject of feelings comes up.

And if there were more to it, and Lemoine had been removed to keep who knows what secret? Well, it is an intriguing hypothesis, but one about which we know nothing (yet).

Published by Walker Ronnie
