The only thing the artificial intelligence was missing was a voice, and now it has arrived: with the new update, ChatGPT not only talks but also looks at photos. The ability to express itself out loud is just the latest in a long series of features that OpenAI is adding to its paid plans. Yes, because voice and the ability to view (and respond to) images will be reserved for Plus and Enterprise subscribers.
ChatGPT speaks and listens, but how?
The ability to speak comes from the implementation of text-to-speech and speech-to-text technologies. On one side, the user will be able to ask the AI questions out loud, and the system will transcribe the audio into text to understand its meaning. On the other side, the artificial intelligence will do the opposite: it will prepare a written response and then pronounce it aloud. All in just a few seconds.
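The round trip described above can be sketched in a few lines. Everything here is a simplified illustration, not OpenAI's actual implementation: the function names are hypothetical, and the speech steps are stubbed out where a real system would run a transcription model and a speech synthesizer.

```python
def speech_to_text(audio: bytes) -> str:
    """Stub transcriber: a real system would run a speech-recognition model."""
    # For illustration we pretend the audio bytes are the transcript itself.
    return audio.decode("utf-8")

def generate_reply(question: str) -> str:
    """Stub language model: returns a canned written response."""
    return f"Here is an answer to: {question}"

def text_to_speech(text: str) -> bytes:
    """Stub synthesizer: a real system would render an audio waveform."""
    return text.encode("utf-8")

def voice_round_trip(audio_question: bytes) -> bytes:
    """The full loop the article describes: audio in, audio out,
    with plain text in the middle at both steps."""
    question = speech_to_text(audio_question)
    reply = generate_reply(question)
    return text_to_speech(reply)
```

The point of the sketch is the shape of the pipeline: speech is never processed end to end as audio; it is converted to text, answered as text, and only then voiced.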
When the new feature launches, ChatGPT's voice will be available in English only, so you will have to speak to it (and expect answers) in English. But OpenAI has promised to expand the AI's language coverage in the near future.
It will also be able to recognize images
In addition to this, OpenAI has announced that ChatGPT will be able to analyze images and photos. The user will be able to ask a question by attaching an image to it. For example: by uploading a photo of a flower, it will be possible to ask the chatbot for an opinion on the floral species pictured. Of course, since ChatGPT is not always 100% reliable, don't necessarily expect correct answers.
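To give an idea of what "attaching an image to a question" looks like in practice, here is a sketch of how such a request might be packaged for a multimodal chat API. The message shape follows the format OpenAI documented for image input in its chat API; the model name and the helper function are assumptions for illustration, not the exact interface the consumer app uses.

```python
import base64

def build_image_question(image_bytes: bytes, question: str) -> dict:
    """Bundle a photo and a text question into one request payload
    (base64-encodes the image as a data URL, a common convention)."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",  # placeholder model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
            ],
        }],
    }
```

The key detail is that text and image travel together in a single user message, so the model can answer the question in the context of the attached photo.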
Both new features will have to be enabled in the chatbot's settings; they will not be active by default.