The launch of ChatGPT 4, the most advanced version of OpenAI's popular chatbot, introduces new cybersecurity risks. This is the finding of a recent report by Check Point Research (CPR), the Threat Intelligence division of Check Point Software Technologies Ltd.
CPR experts analyzed the new version of the AI and hypothesized possible malicious scenarios. Specifically, they identified five potential applications by cybercriminals, some of which are decidedly alarming.
All the risks of ChatGPT 4
CPR notes that ChatGPT 4 includes protections and restrictions that should, in theory, prevent cybercriminals from using it for malicious purposes. However, the experts explain that these restrictions are easily circumvented. Based on an initial analysis of the bot, CPR believes there are five possible malicious applications.
By taking advantage of the advanced capabilities of the AI, cybercriminals could generate malware or phishing attacks with just a few clicks. “In some cases, these scenarios allow even inexperienced attackers to create malicious tools,” the experts said. Oded Vanunu, Head of Products Vulnerabilities Research at Check Point Software, stated:
“ChatGPT 4 can be useful for both good actors and dangerous ones. Well-intentioned users can use ChatGPT to create and distribute code that benefits society, but at the same time, attackers can use this AI technology to commit cybercrimes. As AI plays a significant and growing role in both cyber attacks and defense, we expect this platform to be leveraged again, and we will spend the next few days working to better understand how.”