Until recently, OpenAI wouldn’t allow its models to be used for activity that had “high risk of physical harm, including weapons development, military and warfare.” Now it has removed the words ‘military’ and ‘warfare’. Is this a routine update, or should we be worried?

Until recently, OpenAI had explicitly banned the use of its models for weapons development, military applications and warfare.
But on 10 January, it updated its policy. It continues to prohibit the use of its service “to harm yourself or others”, citing “develop or use weapons” as an example, but has removed the words ‘military’ and ‘warfare’, as first pointed out by The Intercept.
With OpenAI already working with the Pentagon on a number of projects, including cybersecurity initiatives, there’s concern in some quarters that this softening of stance could result in the misuse of GPT-4 and ChatGPT for military purposes and warfare. Microsoft Corp., OpenAI’s biggest investor, already has software contracts with the armed forces and other government branches in the US.
In September 2023, the US Defense Advanced Research Projects Agency (DARPA) said it would collaborate with Anthropic, Google, Microsoft and OpenAI to help develop “state-of-the-art cybersecurity systems”. Anna Makanju, OpenAI’s vice-president of global affairs, said in an interview at the World Economic Forum in Davos on 16 January that the company had initiated talks with the US government about methods to assist in preventing veteran suicides.
OpenAI told TechCrunch that while it does not allow its platform to be used to harm people, develop weapons, conduct communications surveillance, or injure others or destroy property, there are “national security use cases that align with our mission”, such as its collaboration with DARPA on cybersecurity tools.