
ChatGPT can now be used for military purposes

We won’t rehash how much the generative artificial intelligence behind software like ChatGPT, launched in November 2022, has changed and is still changing our lives.

As happens, at least in the early stages, with every disruptive innovation, we are discovering generative AI's enormous potential benefits alongside its risks. One issue still to be resolved, for example, is how the models are trained. That problem hit the headlines when the New York Times sued OpenAI and Microsoft, accusing them of using copyrighted material (a charge the two companies were quick to answer).

Then there is the matter of malicious uses of chatbots that generate text and images. Think, to take just one example among many, of deepfakes, which have been used to superimpose the faces of several famous actresses onto short pornographic videos.

But with the latest update to OpenAI’s policy, much darker scenarios could arise.


ChatGPT and military purposes

The usage policy that OpenAI published in March 2023 for its AI software included a passage explicitly stating that “military or war-related use” of ChatGPT was prohibited.

Then something happened, and that something coincided with the opening of the GPT Store. On January 10 the company changed the policy, officially to make it more streamlined and easier to follow, and to “offer more specific guidelines for the service.”

Niko Felix, an OpenAI spokesperson, said: “We tried to create a set of universal principles that were easy to remember and apply.”

Reading the new text, however, one discovers that the key passage stating that ChatGPT cannot be used for military purposes has disappeared. In its place there is now a sentence saying that the software must not be used for “any action that may cause damage to third parties” or for “the development and use of weapons”.

OpenAI and collaboration with the Department of Defense

Another spokesperson for Sam Altman’s company then said that “our policy does not allow our tools to be used to develop weapons, surveil communications, injure other people or destroy property. However, there are national security use cases that align with our mission.”

And, to dispel any doubts, the spokesperson clarified that OpenAI is already working with the defense establishment: specifically with DARPA, the United States Department of Defense agency that develops new technologies for military use.

But how, weapons production aside, could ChatGPT’s AI be bent to military purposes?

ChatGPT, military uses and the risk of hallucinations

Any military use of ChatGPT, then, would not involve the development of armaments. It could, however, involve tasks such as the drafting of documents. And given the well-known risk of chatbot “hallucinations” (i.e. the possibility that the software answers in a way that is wholly inconsistent with the question asked, or invents inaccurate or outright non-existent information), fears about using ChatGPT for military purposes are not unfounded.

As cybersecurity expert Heidy Khlaaf explains: “Given the known cases of hallucination in large language models and their general lack of accuracy, their use in warfare can only lead to imprecise and biased operations that risk worsening collateral damage and civilian casualties.”

Words and deeds

Sarah Myers West, Managing Director of the AI Now Institute, was interviewed by The Intercept, which first broke the news of the change to OpenAI’s policy.

Myers West explained that “given the use of AI systems to target civilians in Gaza, now is a very sensitive time to remove those words from OpenAI’s service policies.”

On the other hand, as already mentioned, OpenAI has made it known that the new version of the document “aims to create a series of universal principles that are easy to remember and apply, and a principle such as ‘do no harm to others’ can be understood more easily than a sentence in which we openly mention weapons and the war industry”.

Beyond the differing views on the wording, there are the facts. In short, it remains to be seen (and we hope it happens as late as possible) what consequences artificial intelligence’s entry into the war industry will have.
