
WormGPT, the “evil twin” of ChatGPT used for cybercrime

The world of artificial intelligence is fascinating terrain, full of potential but also full of pitfalls. We are not talking about the possible implications of the technology for the future of work, but about increasingly sophisticated cyber threats that are already here. One such threat is WormGPT, the “evil twin” of ChatGPT, which uses AI to carry out hacker attacks of various kinds.

WormGPT, when AI helps cybercrime

WormGPT is the subject of a recent report by security researchers at SlashNext. It is a “black hat” alternative to the well-known service offered by OpenAI, developed by cybercriminals to carry out their misdeeds. Based on the GPT-J language model, WormGPT has been specifically trained for illicit purposes.

Its workings are much like those of other generative AIs: WormGPT offers a natural dialogue that mimics human conversation. This serves above all to simplify the creation of phishing campaigns via email and messages on your smartphone. The bot can trick victims with text and media content designed to push them toward infected file downloads or purpose-built fake websites. Or it can simulate an email from your bank or from an electricity, gas or internet provider to steal your credentials.


Furthermore, it provides tips and advice to hackers on how to scam less computer-savvy users. In short: it does everything ChatGPT has accustomed us to, but in a criminal key.

Creative and criminal AI

Even legitimate tools like OpenAI’s ChatGPT and Google’s Bard have, on paper, the potential to create these dangerous texts. Indeed, in the past they have produced them, when users exploited flaws in the safeguards put in place by the companies. But the developers blocked these loopholes, so cybercriminals had to create an alternative model capable of circumventing the problem.

In this way, cybercriminals can access the service for the price of 60 euros per month or 550 euros for a whole year, as reported by Wired. Modest figures, especially since hackers end up paying them with the money of the people they cheat.

While researchers are trying to find ways to mitigate the potential malicious use of AI tools like ChatGPT, WormGPT represents a dangerous innovation. Accessing it through certain hacker forums is quite simple and does not require particular computer skills.


WormGPT, the “evil twin” of ChatGPT, poses a rapidly growing threat to online security in 2023. Specifically designed for malicious purposes, this tool fuels concerns about the misuse of artificial intelligence. Cybersecurity companies will try to respond – perhaps also leveraging the potential of AI.

In the meantime, we must stay just as alert: do not open suspicious emails, do not download attachments you are not sure about, and do not click links whose destination you do not know. To counter artificial intelligence, a little human common sense is needed.
