The more generative artificial intelligence creeps into our lives, the more each of us (don't deny it) imagines possible scenarios for the near future.
There are catastrophists who fear an unemployed, adrift humanity subjugated by software, and others who are convinced that content-producing AI will wisely assist us across a wide range of intellectual and recreational fields.
Until now, though, we hadn't stopped to consider that, sooner or later, this too was bound to happen: artificial intelligence at the service of hackers. That is what none other than OpenAI and Microsoft reported in the last few hours, even though there had been earlier warning signs. Let's try to take stock of the situation.
Artificial Intelligence: Hackers use ChatGPT
Our government recently approved a bill that toughens penalties for cybercriminals.
But who knows, it may already need updating: lately, hackers have been using artificial intelligence for their shady operations. So say OpenAI, the maker of ChatGPT, and Microsoft, which has entered into a profitable (and expensive) partnership with Sam Altman's company.
On Wednesday 14 February, a detailed post appeared on the Microsoft Security blog under the emblematic title "Staying ahead of threat actors in the age of AI".
It presents "research into emerging threats in the age of artificial intelligence, focusing on identified activities associated with known threat actors."
Threats from Russia and North Korea
A large part of the report is dedicated to the actions taken by Microsoft and OpenAI to counter threats from hackers who use artificial intelligence.
It then provides a short list of cybercriminal groups that "are exploring and testing different AI technologies as they emerge, in an effort to understand the potential value to their operations and the possible security controls they may need to circumvent."
Among others, the report mentions Forest Blizzard, also known as Strontium, an organization linked to Russian military intelligence. We read that "Forest Blizzard has been extremely active in targeting organizations related to Russia's war in Ukraine throughout the conflict, and Microsoft believes that Forest Blizzard's operations play a significant supporting role in Russia's foreign policy and military objectives, both in Ukraine and in the rest of the world."
Thallium, a North Korean hacker group, also made improper use of LLMs (Large Language Models). The group was "very active throughout 2023. Its recent operations relied on phishing emails to compromise and gather intelligence from senior figures with expertise on North Korea." The perpetrators posed as academic institutions and NGOs to trick victims into providing expert opinions and commentary on foreign policies related to North Korea.
Other AI-assisted hacking offensives have been launched by Iran and China.
No significant attacks
Microsoft and OpenAI stress that none of the LLM-assisted attacks was significant, and that no previously unknown techniques were used. Furthermore, all accounts associated with these criminal groups have been deactivated.
But cybersecurity must nonetheless face new challenges and equip itself with new defensive measures.
New frontiers of crime
In this regard, a very recent example involving image-generating AI illustrates the point.
A few days ago, a group of Iranian hackers broadcast a deepfake video in the United Arab Emirates (and also in the United Kingdom and Canada), in which an AI-generated news anchor introduced a report showing (unverified) images of Palestinians killed in Gaza by the Israeli military.
Now think of ChatGPT's ability to write code, or the ability of other software to imitate the voices of more or less well-known people.
It is no coincidence that, a few months ago, the big tech companies hired groups of hackers to identify flaws in their artificial intelligence models, hoping the old adage doesn't apply: "If you can't defeat your enemy, make him your friend."