Artificial intelligence, especially generative AI, is increasingly entering our daily lives.
And, as we approach the first anniversary of ChatGPT (launched on November 30, 2022), artificial intelligence is filling the headlines. Think of the intricate story of Sam Altman's removal and very rapid reinstatement at the helm of OpenAI. Or the very recent launch, by perhaps its biggest competitor, Anthropic, of the chatbot Claude 2.1. Or, again, the investigation opened by the Italian Data Protection Authority to clarify how data is collected to train artificial intelligence algorithms.
But perhaps the most important news is a seemingly minor item: AI-generated photos will be excluded from the famous World Press Photo Contest, even if the exclusion criterion rests on rules that remain unclear and somewhat arbitrary.
AI guidelines: Italy signs document with 18 other countries
World Press Photo’s decision, while questionable in form, is central in substance: for the near future of artificial intelligence (and our own), it will be essential to have clear, unambiguous rules that are neither too restrictive nor too permissive.
For this reason, 18 countries, including Italy, have presented a document containing guidelines for the safe development of AI systems.
The 20-page document is called exactly that: Guidelines for secure AI system development.
The document with the guidelines on AI, signed by 23 agencies from 18 countries, was made public on Monday 27 November.
The countries involved are Australia, Canada, Chile, South Korea, Estonia, France, Germany, Japan, Israel, Italy, Nigeria, Norway, New Zealand, Poland, the United Kingdom, the Czech Republic, Singapore and the United States.
The guidelines on AI arise from the first international conference on the topic, the AI Safety Summit, held on November 1 and 2 in the United Kingdom.
The aim of the document is to become a first point of reference for developers, so that they can work on (and benefit from) responsible, ethical and safe artificial intelligence.
The AI guidelines contained in the document are non-binding, but they are meant to serve as a compass for the ethical and responsible development and use of artificial intelligence.
The key word is security, or rather cybersecurity, which must oversee every stage, from design to development to use, also for obvious reasons linked to the increasingly widespread phenomenon of hacking.
The four key areas
The AI guidance document is divided into four key areas: secure design, secure development, secure deployment, and secure operation and maintenance.
The first area contains guidelines that apply to the AI design phase, where understanding risks and threats is critical.
The second area includes guidelines that apply to the AI development phase, “including supply chain security, documentation and management of resources and technical debt”.
The third area contains guidelines that apply to the deployment phase of the AI system, including protecting the infrastructure from compromise or data loss, as well as responsible release.
Finally, the fourth area offers guidelines for the secure operation and maintenance phase, in which the system is monitored, updated, and information about it is shared.
Bruno Frattasi, director general of Italy's National Cybersecurity Agency (ACN), commented on the international document containing the AI guidelines.
Frattasi said: “Artificial intelligence is a challenge that the Agency does not want to and cannot avoid. This is why we have joined this initiative with conviction. It is a challenge that can only be won together: we must bring to bear the best intellectual and instrumental energies of our country and of all the other countries preparing to face, starting from the upcoming Italian-led G7, this highly demanding undertaking for all humanity.”