OpenAI allows using its AI for ‘military and warfare’ purposes. Here's what we know so far
- Nishadil
- January 14, 2024
OpenAI has quietly made changes to its usage policy, removing the ban on using its technology for "weapons development" and "military and warfare". The rewritten policy page stated that changes had been made to the document to make it "clearer" and "more readable". Since then, the word "clearer" has been replaced with "added service specific guidance".
The changes first came to light via a report by The Intercept, which noted that they were first made on January 10. The report noted that the original OpenAI usage policy included a ban on using the technology for any "activity that has a high risk of physical harm", including "weapons development" and "military and warfare".
The new OpenAI policy, while retaining the phrase "use our service to harm yourself or others", drops the previous blanket ban on using its technology for military and warfare purposes. The company does, however, continue to prohibit the use of its technology for "weapons development".
In a statement about the policy quoted by TechCrunch, the AI startup said, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions,” the statement added.
Concerns about the adverse effects of AI, particularly in the waging of war, have long troubled experts around the world. These concerns have only been exacerbated by the launch of generative AI technologies such as Google's Bard and OpenAI's own chatbot, which have stretched the limits of what AI can achieve.
In an interview with Wired magazine last year, former Google CEO Eric Schmidt compared artificial intelligence systems to the advent of nuclear weapons before the Second World War. Schmidt said, “Every once in a while, a new weapon, a new technology comes along that changes things. Einstein wrote a letter to Roosevelt in the 1930s saying that there is this new technology—nuclear weapons—that could change war, which it clearly did. I would argue that [AI-powered] autonomy and decentralized, distributed systems are that powerful.”