According to the Federal Bureau of Investigation (FBI), hackers are using artificial intelligence to write malware and to spread malicious programs and phishing emails. This confirms what many experts already believed: generative AI helps everyone – including cybercriminals.
Artificial intelligence and hackers, the FBI report
The federal agency has raised the alarm about the use of free, customizable open-source AI models, which allow hackers to create malicious code even without in-depth programming knowledge. Traditionally, writing malware required high technical skill; now, thanks to generative AI, almost anyone can produce a malicious program.
Generative AI is at the heart of these new digital threats. These tools can write and debug code like an expert programmer would, making malware creation easier, faster, and harder to detect. The code doesn’t always turn out perfect, but for hackers that is a minor problem: by focusing on quantity rather than quality, they can attack many users at once.
This situation worries the FBI: cybercrime is on the rise, and hackers are honing increasingly sophisticated techniques.
Malware and AI
According to the FBI, artificial intelligence makes hackers more dangerous. Malware created with AI is often difficult to detect and even harder to remove once it has infected a system. AI also makes it possible to launch attacks on a larger scale, hitting numerous systems simultaneously and increasing the scope and impact of threats.
Another dangerous tool used by scammers is the “deepfake”: videos or photos artificially manipulated to look real. This technology can be exploited to impersonate famous people or to spread misleading or false information. Through deepfakes, a victim can be tricked into performing specific actions, such as making payments or disclosing confidential information.
Concern about the harmful effects of AI is widespread among governments around the world. The rapid growth of this technology makes it difficult to design and implement effective protective measures. As a result, some governments have started enlisting the cooperation of major tech companies to address the issue.
In the United States, the White House recently launched a “voluntary commitment” program aimed at mitigating the risks of AI. The program involves leading technology companies, including Google, Amazon, Meta, and others, in formulating policies and protocols to prevent the abuse of AI.
The fight against the misuse of AI is a global challenge that requires a coordinated approach and close collaboration between governments, companies and international institutions. Only through concerted action can we ensure safe and responsible use of Artificial Intelligence, protecting our society from increasingly sophisticated and dangerous threats.