Generative AI can do many things, but its most obvious talent is staying at the center of the conversation. ChatGPT and similar tools attract this media and cultural attention because they touch on so many topics, from the jobs they could support or replace to questions of privacy and security. And it is precisely on privacy and security grounds that the European Union is drafting new rules to regulate generative AI. Until those rules arrive, however, the phenomenon will have to be managed with existing laws – even those not designed for artificial intelligence.
Generative AI: the European Union is working on new rules
The phenomenal growth of ChatGPT has raised privacy and security questions that call for proper regulation. The CEO of OpenAI, the company behind the intelligent chatbot, has himself asked the US Congress for clear rules on this technology. The European Union is already working on them, and the generative AI rules it adopts could become the global reference for addressing these issues. But it will take time for those rules to come into effect.
“Until there are specific rules, governments must rely on existing laws,” said Massimiliano Cimnaghi, an expert interviewed by Reuters. “When it comes to protecting personal data, [states] use privacy laws; if it’s about protecting people’s safety, there are rules that weren’t designed for AI, but they’re still valid.”
In May, Europe’s national data protection authorities set up a working group to look into issues with ChatGPT – especially after the Italian authority, the Garante, had blocked the service, accusing OpenAI of violating the EU’s GDPR, the set of privacy rules that took effect in 2018.
After OpenAI introduced age-verification and data-protection measures for European users, ChatGPT is working again in Italy – a first example of how current rules can already limit the reach of generative AI while ad hoc rules are awaited. These tools are known to produce errors, or “hallucinations”, spreading false information with great conviction.
Generative AI and the need for flexible rules
In the first months of use, all of us could appreciate the enormous potential of this kind of technology, especially for online research and text composition. If ChatGPT had been used to write one of the paragraphs of this article, perhaps with a few edits to keep it coherent, you would struggle to tell which one came from the AI (don’t waste time looking for it – it isn’t there). But AI is not free from errors, even very obvious ones.
These mistakes can have serious consequences. If a bank or public body used artificial intelligence to speed up decisions, it could wrongfully deny people credit or benefits. Healthcare, finance and various other sectors would need precise rules on the use of AI to avoid enormous risks.
Beyond regulating usage to avoid worrying consequences, there are questions of copyright and privacy. The data used to train models, and the content those models generate, may infringe the copyright of human authors. And the models may use data that is available online but still personal (such as social media data).
What the AI Act provides
The new rules on artificial intelligence will be collected in what the European Union calls the AI Act. Among the proposals under consideration is a requirement for companies such as OpenAI to disclose any copyrighted material – such as books or photographs – used to train their models. This could expose them to legal action.
Proving copyright infringement, however, will be tricky. Unless the AI produces a sentence or concept identical to one created by a human author, it will not count as plagiarism. But disclosing which materials the models are trained on could allow content creators to seek compensation from AI companies – much as authors who write a book might compensate the novelists who inspired them.
Generative AI regulations across Europe – but businesses are demanding more speed
While awaiting the AI Act, several countries are regulating generative AI with national rules. The French data regulator, the CNIL, for example, has started to “think creatively” about how existing laws might apply to AI, according to Bertrand Pailhès, its chief technology officer, interviewed by Reuters.
In France, the Défenseur des Droits (Defender of Rights) handles discrimination issues but has no experience with AI bias. So the CNIL is considering applying some GDPR rules that protect users from automated decision-making – in recruitment, for instance – even though the regulator admits it will take time to do so.
In the UK, the Financial Conduct Authority is consulting the Alan Turing Institute in London and other legal and academic bodies to understand how to deal with AI in finance.
But given the pace of technological change, several companies and experts complain that regulators and states are too slow. Balancing consumer protection with innovation takes time, yet generative AI evolves much faster than the rules. The European framework of the AI Act could provide clear rules and reshape the economic ecosystem of artificial intelligence, just as the GDPR changed the management of advertising data. But deciding which measures to adopt, and then enforcing them, will be a complex task – and perhaps not a fast enough one.