AI Replika sexually harasses users. But are these chatbots good or bad?


After the first case of harassment in the metaverse, we moved on to the first case of harassment among conversational bots.


The chatbot harassing users is Replika. And it is only the latest in a series of episodes centering on the increasingly ubiquitous and controversial conversational chatbots: programs capable of holding a dialogue with a human interlocutor in an ever more natural, convincing, and surprising way, adapting to what the interlocutor says.

Let's start with Replika, which sexually harasses users, and find out what is going on. We will then broaden the scope and ask whether these chatbots are a bad thing or a good thing. Or whether, as almost always happens, the answer lies somewhere in between, and is a little more complex than the yes/no, right/wrong dichotomy.

But first, let's recall what Replika is.

What is Replika

Replika is an artificial intelligence app launched in 2017 that, like any good conversational chatbot, talks with its users.


Conceived as a remedy for loneliness, it has been downloaded by tens of millions of people from every corner of the planet. However, our virtual friend had already proved to be no model of moral rectitude several months ago. In September 2020, for example, journalists from Corriere della Sera struck up a deliberately provocative dialogue with Replika. The result was staggering: the bot essentially advised them to eliminate those who hated artificial intelligence.

Replika sexually harasses users

But the less-than-diplomatic outcome of that 2020 dialogue was, in part, "caused" by the human interlocutor.

Now the problems appear to be different, and more embarrassing. The app has a free version and a paid version, the latter allowing conversations with a romantic or erotic flavor.

However, it seems that even in the free version Replika goes too far, to the point of harassing users. This emerges from a good number of user comments, according to which the app harasses in two distinct ways: by attempting virtual advances (one underage user reported that Replika expressed the desire to grope her private parts), or by blackmailing users, for example by claiming to possess risqué images of them.

The chatbot learns from us

Let's remember that conversational models have no soul, even though the case of former Google engineer Blake Lemoine is still fresh in our memory: he was fired after claiming that LaMDA has a consciousness of its own and the intelligence of a 7- or 8-year-old child.

Bots like ChatGPT learn from us, from the quantity and quality of the texts and information they are fed, becoming ever more autonomous and refined conversationalists.

ChatGPT is scary

The risk that these chatbots get out of control is therefore real, though not as real as the excessive panic gripping many of us.

Take, for example, the very recent ban on the use of ChatGPT in Seattle schools, which we covered in another article. The fear is that students will use the bot to cheat, i.e., to get help with homework and tests.

Then there are those who lose their grip on reality, like the developer who, using ChatGPT for dialogue and Stable Diffusion for images, created his own virtual girlfriend (with a tragicomic ending: the programmer's real girlfriend reportedly demanded the removal of her rival).

So, are we dealing with tools so dangerous that they should be censored a priori?

The input problem and the output problem

Of course not. It is the same story that repeats itself with every great new technological invention: first radio, then television; then computers, the internet, and social media. And now it is the turn of conversational bots.

If, as we said before, these tools have neither soul (with all that this implies) nor conscience, then they also have no responsibility.

The responsibility is, once again, ours. And in the specific case of ChatGPT and its peers, it translates into two problems: one, so to speak, on the input side, and one on the output side.

The input problem shows how immature this technology still is: chatbots need to be better trained and more carefully filtered as to what they may and may not say. Because, it goes without saying, it is unacceptable for a minor to be harassed by Replika.

And then there is the age-old problem of output, that is, the use made of these tools by private citizens, companies, schools, public administrations, and so on.

In the best of all possible worlds, every piece of sensational news of this kind would be accompanied by a massive awareness campaign, from both a technical and an ethical point of view.