Geoffrey Hinton leaves Google after 10 years: ‘AIs are dangerous’


After 10 years at Google, 75-year-old Geoffrey Hinton, nicknamed the "Godfather of AI", is leaving the company in open opposition to its decision to keep developing artificial intelligence models.

For his work in AI, Hinton won the Turing Award in 2018, but now he seems to have changed his mind. For the scientist, the new artificial intelligence models are dangerous for humans, going so far as to state that "they could kill us all". And if for some this is just science fiction worthy of the most dystopian novels of Philip K. Dick, for many of Hinton's colleagues the prospect of a future dominated by machines is not so distant. In fact, just last month the writer Eliezer Yudkowsky also underlined the risks to humanity of a world dominated by AI (his article appeared in TIME).

La Repubblica reports that Hinton left Google so as not to become the Robert Oppenheimer of AI. Oppenheimer, we recall, is the physicist who helped the USA build the first atomic bomb. But is AI really comparable to a weapon of mass destruction? With due proportion, according to Hinton, yes.

Geoffrey Hinton: “I’m leaving Google to be able to talk about the dangers of AI”

In a long interview that appeared in the New York Times, Hinton explained in depth the reasons that led him to leave Google. The scientist said he left to "be able to speak freely about the dangers of AI", explaining that as a Google employee he could not have done so.

More than physical survival and killer machines, Hinton's fears are directed at the dangers of generative AI. According to the former Google employee, the images and texts generated by artificial intelligence (and the related language models) are likely to overwhelm human-created content, making it "impossible to distinguish the true from the false". A condition that, according to Hinton, risks leading humanity to extinction.

It is worth recalling that, at present, AI models have no self-awareness and no moral agency of their own. This means they are incapable of doing evil by themselves, or, for those still imagining a future dominated by machines, of aspiring to power or rebellion. Despite this, as Hinton himself explains, they can be used by people to do harm: "Today it's hard to imagine how you can stop bad actors from using it [AI, ed.] to do bad things."

For Hinton, the only solution is to stop developing artificial intelligence: in short, to stop before reaching the point of no return. Yes, it really does sound like science fiction.
