Dear readers, you can confide in us: are you among those who have already had a conversation with ChatGPT, or do you belong to the category of people who are horrified at the mere idea of pronouncing its name, as happened at Hogwarts with Voldemort?
Remarkable, first of all, is the quantity of articles and reports that continue to appear about OpenAI’s most advanced conversational chatbot, not to mention the huge investment of money by Microsoft (and by competing companies, which are hurrying to develop artificial intelligence as similar to it as possible).
But equally evident is the fact that, to put it bluntly, ChatGPT scares us and attracts us, and both of the emotions it arouses in us seem irrepressible. It is rather like the effect the dark has on children. How come?
To answer, let’s start by noting the sensational (and perhaps unique) reception that ChatGPT has had and is having.
After that, delving into its characteristics, let’s try to understand why ChatGPT scares us and attracts us at the same time.
Here comes ChatGPT. And it’s immediately banned
Ever since ChatGPT was launched (officially on November 30, 2022) and took hold, there has been panic.
In another article, we briefly listed the series of bans that the conversational chatbot has had to endure. We are talking about schools and universities all over the world. The bans started with New York’s schools and then spread to various countries, from France to Australia to India.
Not to mention other countries where ChatGPT has never been accessible: China, Afghanistan, Belarus, Iran, Russia, Ukraine and Venezuela.
The news is frightening. But is that all?
In our aforementioned article we noted that an attitude of a priori rejection, as much as the opposite one of unconditional embrace, has always divided people at the arrival of a new and disruptive technological tool.
So it was, in its time, first for radio and then for television.
But perhaps, in retrospect, this is a partial point of view, one that does not get at what is specific to ChatGPT: the chatbot which, let’s remember, in recent weeks has demonstrated its ability to compose poems and songs (and even earned a scolding from the great singer-songwriter Nick Cave).
So what is unique about ChatGPT, that it scares and at the same time attracts us all?
ChatGPT and the plagiarism problem
We can approach the answer starting from a problem that is one of the reasons the chatbot has been banned in many schools and universities: is paraphrasing a text with an artificial intelligence a crime?
The legal consultancy site La legge per tutti answers for us: “ChatGPT is unable – at least for the moment – to create an original work, albeit inspired by another. ChatGPT, when it starts from another text – which is provided by the user himself asking for a summary or a paraphrase – commits a real copyright infringement.”
Wikipedia versus ChatGPT
But what interests us, in our reasoning, is not so much the legal aspect as the, so to speak, intimate one.
We have just seen how ChatGPT violates the copyright of texts, because it is not able to rework them while detaching itself sufficiently from the original.
Here is the key point: the act of violating. We will come back to it.
The fear is that the OpenAI chatbot could generate a Copernican revolution in the world of communication, information retrieval, study and research.
But why, when Wikipedia arrived, did many perhaps smile at the dubious scientific rigor of some entries, while no one took the fright that many are taking today with ChatGPT? And why did no one impose bans on Wikipedia?
The real fear of ChatGPT
ChatGPT, it is true, can plagiarize other people’s texts, produce fake news and even write malicious code.
But what, in the end, is frightening and attractive about ChatGPT must once again be brought back to the concept of violation. It is we, for the first time, who feel violated in what is most intimate and peculiar to us: the thing that makes our chest swell and deludes us into believing we are the masters of the world. We are, of course, talking about consciousness. That is, the possibility of making autonomous decisions based on a moral sense.
So, when in dialogue with a machine it seems to us that it too can reason autonomously, we feel we are dealing with a sibling and, at the same time, an antagonist.
ChatGPT is us
But there is a further step to take to complete the reasoning. It goes without saying that chatbots are “fed” on the information that we humans give them. We can therefore decide how to educate them, depending on what we show them. A machine is neither good nor bad in itself, and claiming that it is only serves to relieve us of responsibility.
This is where the fear and the perverse attraction towards ChatGPT come from. OpenAI’s conversational chatbot can be modeled to look remarkably similar to us humans. And we humans, as we know, are capable of the noblest gestures and the vilest abominations.