Every day, portals and specialized magazines fill their pages with articles on Microsoft Bing.
Or, more precisely, on the next version of the search engine, which will be based on ChatGPT, OpenAI's much-discussed creation.
The constant stream of commentary about a technology undoubtedly destined to revolutionize the way we relate to the Net (and perhaps even to reality) falls into two categories.
One, of course, has to do with ethics. We are questioning the limits and risks of conversational chatbots, which are difficult to train thoroughly without them also absorbing, so to speak, the less noble parts of human knowledge (and we will come back to this).
Then there is a more practical side: the forthcoming Microsoft Bing is certainly astounding, but not free from inaccuracies.
Are we in 2022?
We reported one example of a gross Microsoft Bing error in a recent article. In a conversation, the ChatGPT-based engine was convinced that the year was 2022. And woe to anyone who contradicted it: in that case the chatbot would fly off the handle and give answers that, to call them undiplomatic, would be an understatement.
So how can we minimize this problem?
Microsoft limits Bing
On Friday 17 February, a note appeared on the Microsoft Bing blog. Brief, but essential, because it explains how the Redmond-based company will take steps to limit incorrect or inaccurate responses from its search engine.
It starts from a premise: “Very long chat sessions can confuse the underlying chat model in the new Bing.”
Hence the decision: “Starting today, the chat experience will be limited to 50 chat turns per day and 5 chat turns per session. A turn is a conversation exchange that contains both a user question and a Bing reply.”
The reason
The company is skilled at downplaying the problem, explaining that the choice stems mainly from the fact that the vast majority of chat sessions are short.
“The vast majority of you find the answers you are looking for within 5 turns, and only about 1% of chat conversations have more than 50 messages. After a chat session reaches 5 turns, you will be prompted to start a new topic. At the end of each chat session, the context needs to be cleared so that the model does not get confused.”
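To make the mechanism concrete, here is a minimal sketch of how limits like these could be enforced: 5 turns per session, 50 turns per day, with the conversation context wiped when a session ends. The class and method names are purely illustrative assumptions, not Microsoft's actual implementation.

```python
from dataclasses import dataclass, field

# Limits as described in the Bing blog note (the enforcement logic below is hypothetical).
MAX_TURNS_PER_SESSION = 5   # after this, the user is prompted to start a new topic
MAX_TURNS_PER_DAY = 50      # daily cap across all sessions


@dataclass
class ChatSession:
    # Each turn is one exchange: a user question plus the engine's reply.
    turns: list = field(default_factory=list)


@dataclass
class DailyChatState:
    turns_today: int = 0
    session: ChatSession = field(default_factory=ChatSession)

    def can_chat(self) -> bool:
        """Whether the user is still under the daily cap."""
        return self.turns_today < MAX_TURNS_PER_DAY

    def record_turn(self, question: str, answer: str) -> bool:
        """Record one exchange; return True if a new topic must be started."""
        self.session.turns.append((question, answer))
        self.turns_today += 1
        if len(self.session.turns) >= MAX_TURNS_PER_SESSION:
            # Clear the context so the underlying model does not get confused
            # by a conversation that has grown too long.
            self.session = ChatSession()
            return True
        return False
```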
The fact is that the futuristic technology powering the latest version of Microsoft Bing, namely ChatGPT, is still immature. And the longer the conversation goes on, the more blunders it makes.
Beyond the technical problems
While this may be one possible way to mitigate Microsoft Bing's errors, the other and even more burning question remains: that of the ethical limits of ChatGPT.
In this regard, the words of Tatiana Tommasi, a professor at the Polytechnic of Turin and an expert in artificial intelligence, are measured. Interviewed by Ansa, Tommasi said, among other things: “Artificial intelligence tools have enormous potential, but they are in an experimental phase and need to be perfected; the goal will be personalization.
It is important that they are democratic, accessible to everyone and without monopolies, and it must be clear to the user when they are being used.”
More drastic (and similar in content to what Elon Musk recently said on the subject) were the remarks of Sam Altman, none other than the CEO of OpenAI. Altman said we may not be far “from creating potentially scary artificial intelligence,” and added that “regulation will be fundamental”.
Targeted advertising, what could change
The new Microsoft Bing could also adopt new and more subtle ways of serving users targeted advertising.
This was reported by Reuters in an article published on Friday 17 February.
Bing already integrates advertisements. But in the near future, the banners we see will be selected according to the questions we ask the search engine.
And we still do not know whether the advertisements will appear among the Microsoft Bing responses, mixed in with the rest, or whether they will be displayed in visually distinct boxes.
If it were made explicit when a result is an ad, nothing would change from what already happens today with Google (whose ads, labeled as such, appear among the first results). If not, things would become a little more insidious.