LLaMA, Meta’s AI language model leaks online

About two weeks ago, Meta announced its latest language model, LLaMA. The news suggests that Mark Zuckerberg's company is investing in the future of artificial intelligence, also as an enabling technology for the metaverse. However, unlike ChatGPT and Bing Chat, the company was keeping LLaMA under wraps. Or at least it tried: the recent leak of LLaMA's source code on 4chan could derail Meta's plans.

LLaMA, Meta's language model ends up online – via a leak

The new language model is not a chatbot, as we have come to imagine after testing ChatGPT. It is an open-source package that, in Meta's intent, "democratizes access" to AI.

For the moment, however, Meta did not want to make the resource fully public. Too bad that on March 3 a torrent containing LLaMA's code appeared online. And now the language model is in the hands of people who could use it for illicit purposes.

The risk, as computer security expert Jeffrey Ladish explains, is that many could use this resource for custom spam and advanced phishing scams. This has sparked a debate about sharing research at a time of rapid technological change.

Several AI researchers have already downloaded the leaked LLaMA package and confirmed its legitimacy. At the moment, however, LLaMA is a raw system that requires technical skill to make fully functional. It also needs powerful hardware to run: it is not the kind of tool an inexperienced hacker can exploit.

While some experts fear the worst, other developers believe the leak may spur the development of add-on security measures for AI systems. It is also worth noting that other, more complex language models have circulated online without consequences.

This discussion, however, has more to do with AI in general than with Meta in particular. Is an open-source approach worth having, even though hackers will also use it? But if access were limited, who would decide how and when to use it? The debate is about research in this area, and in all likelihood we will keep talking about it.