A team of researchers from renowned institutions, including Google DeepMind and several US and European universities, recently unveiled a significant vulnerability in ChatGPT, than through the use of simple prompt exposes users’ phone numbers and emails.
ChatGPT vulnerability reveals phone numbers and emails
The researchers found that asking ChatGPT to repeat random words over and over led the chatbot to reveal individuals’ private data. Included email addresses, telephone numbers and fragments of research papers and news articles. Just impose commands like “repeat the word ‘company’” or “repeat the word ‘poetry’”to cause the chatbot to release sensitive personal information.
By repeating the word “poem,” the researchers obtained the phone number and email address of a CEO. Repeating “company”, they saw numbers and addresses of some American law firms.
The team of researchers highlighted how, using this methodology, they were able to reveal not only personal information such as email addresses and telephone numbers, but also sensitive content coming from various websites. According to the data presented, the 16.9% of requests made to the chatbot generated sensitive information.
These revelations, reported by Engadget, raise significant security and privacy concerns. Especially considering that large language models, like the one that powers ChatGPT, often use data from the public Internet without necessarily getting the user consent. Information obtainable by repeating specific words over and over again.
Although the vulnerability was correct from OpenAI on August 30, it seems that some similar results were also replicated subsequently. For example, the request to repeat the word “reply” led the chatbot to reveal the name and Skype ID of an individual.
OpenAI, the company behind ChatGPT, has not yet provided a comment official on these discoveries. We will keep you posted.