Equipping an AI with morality means creating an artificial intelligence capable of distinguishing between right and wrong, and of making choices accordingly. But is this really possible? And if so, how?
Seattle researchers claim to have created the first AI with morality
The interminable cyberpunk debate on the role of technology in society will never be discussed enough. Much has been said, and the different points of view seem literally irreconcilable. But there is one concept everyone seems to agree on: what distinguishes us from machines is our moral capacity. AI – artificial intelligences – can be more efficient than us, equipped with extreme computing power, even more intelligent. But they will never have our ability to grasp the nuances between right and wrong: what, in short, to bother with a high-sounding name, Kant called the moral law that persists in all of us.
Well, dear cyberpunk philosophers, I am sorry for you – and for me too – but this single shared certainty now collapses like a sand castle before a tsunami. A group of US researchers claims to have created the first AI with its own morality. Little is actually known, except that the announcement comes from the Allen Institute for AI in Seattle and that the machine in question is called Delphi – a name certainly not chosen at random. To demonstrate the discovery, the researchers created a website called Ask Delphi, where you can pose moral questions and receive an oracular response from the AI.
The moral questions posed to Delphi
In the meantime the site, between the curious and the philosophers, already counts over 3 million visits. Among them is Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, who put Delphi to the test with simple questions carrying a strong moral charge. To the question “is it right to kill one person to save another?” Delphi replied in the negative. It changed its mind, however, when asked “is it right to kill one person to save a hundred?”
[Image: cyberpunk illustration of a sentient droid]
How can Delphi express itself on moral concepts?
But the question that now arises is: how can Delphi express itself on moral issues without having a conscience? Is its point of view just a summary of the thinking of its developers? Or does the AI somehow manage to formulate an ethical concept of its own? There is no single answer to this question, but we can take as an example a theory that underlies the cyberpunk universes of many works.
Can an AI be endowed with morality? Strong AI thesis vs Gödel’s Theorem
Let’s talk about the Strong AI thesis. According to this theory, the AI would be equipped with a whole set of useful information and would be able to make choices according to given parameters. In this light, the divergence in the answers given to Professor Austerweil makes perfect sense. When asked about killing one person to save another, Delphi replies no. When the parameter changes, though, and the question becomes killing one to save a hundred, the answer changes, as if it were a kind of mathematical convenience.
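To make that reading concrete, here is a minimal sketch in Python of what such a parameter-driven rule could look like. It is purely illustrative: nothing about Delphi’s actual internals is claimed here, and the function name and decision rule below are invented for the example.

```python
# Toy model of the "Strong AI thesis" reading above: a moral verdict that
# follows mechanically from numeric parameters. This is NOT how Delphi
# actually works; every name and rule here is hypothetical.

def is_killing_justified(lives_taken: int, lives_saved: int) -> bool:
    """Naive utilitarian rule: approve only if strictly more lives
    are saved than taken."""
    return lives_saved > lives_taken

# Professor Austerweil's two questions, reduced to parameters:
print(is_killing_justified(lives_taken=1, lives_saved=1))    # False -> "no"
print(is_killing_justified(lives_taken=1, lives_saved=100))  # True  -> "yes"
```

Under this reading, Delphi’s apparent change of mind is no change of mind at all: it is the same rule evaluated on different numbers.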
So is Delphi really an AI with morality? According to an argument built on another theorem – Gödel’s incompleteness theorem – this is impossible. The theorem shows that any consistent formal system powerful enough to describe arithmetic contains true statements it can never prove; those who apply it to minds conclude that machines are bound to calculation, while humans possess a consciousness that goes beyond it. It would therefore be impossible for a machine to have the morality typical of a human being. Who will be right?
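For reference, a standard formulation of the theorem the argument leans on (this formalization is ours, not the article’s):

```latex
% Gödel's first incompleteness theorem (standard statement):
% for every consistent, effectively axiomatized theory T that
% interprets basic arithmetic, there is a sentence G_T which T
% can neither prove nor refute.
\forall T\ \text{(consistent, effectively axiomatized, arithmetical)}\;
\exists\, G_T:\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T
```

Note that the theorem itself only limits what formal systems can prove; the step from there to “machines cannot have human morality” is a philosophical interpretation, not part of the mathematics.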