
OpenAI’s policy update signals for the future of AI and military

  • Nishadil
  • January 14, 2024

The leading AI research company, OpenAI, has significantly changed its usage policies, allowing more flexibility for military applications of its technology. The company announced the update on January 10 with little fanfare or explanation. Previously, OpenAI strictly banned using its technology for any "activity that has high risk of physical harm," explicitly including "weapons development" and "military and warfare." This effectively prevented any government or military agency from using OpenAI's services for defense or security purposes.

However, the new policy removes the general ban on "military and warfare" use. In its place, it lists specific examples of prohibited use cases, such as "develop or use weapons" or "harm yourself or others." A spokesperson for OpenAI told Business Insider that the company wanted to create a set of universal principles that are easy to remember and apply, especially as its tools are now widely used by everyday users who can also create their own customized versions of ChatGPT, called "GPTs." OpenAI launched its GPT Store, a platform for users to share and explore different GPTs, on January 10.

The spokesperson also said that the new usage policy includes principles like "Don't harm others," which are broad yet relevant in various contexts, and that the company will continue to monitor and enforce its policies. The change in OpenAI's policy could have significant implications for the future of AI and its role in military and security domains.

Some AI experts have expressed concern that the new policy is too vague and does not address the ethical and social issues that arise from using AI for warfare or violence. For example, the Israeli military has recently claimed that it used AI to identify and strike targets in Gaza, raising questions about the accountability and accuracy of such systems.

Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, said the language in the policy is unclear and leaves room for interpretation and abuse. She also questioned how OpenAI intends to enforce its policy and prevent misuse of its technology.

On the other hand, the policy update could also open up opportunities for OpenAI to collaborate with the military and government agencies on projects that align with its mission, ensuring that AI is used for good and benefits humanity. A spokesperson for OpenAI told BI that some national security use cases are consistent with the company's vision and that the policy change was partly motivated by that.

For instance, OpenAI is already working with the Defense Advanced Research Projects Agency (DARPA) to develop new cybersecurity tools to protect open source software essential for critical infrastructure and industry. The policy change comes at a time when AI is becoming more influential and pervasive across society and the economy.

In a recent conversation, Sam Altman, the CEO of OpenAI, discussed AI's future directions and challenges, such as multimodality, reasoning, reliability, and personalization. He also stressed the need for a global regulatory body for AI, similar to the International Atomic Energy Agency (IAEA), to ensure that AI is used responsibly and ethically.

Other AI leaders, such as Tesla CEO Elon Musk, Meta CEO Mark Zuckerberg, and NVIDIA CEO Jensen Huang, have also weighed in on the regulation and impact of AI. Some AI experts, such as Geoffrey Hinton and Yoshua Bengio, have signed an open letter calling for a six-month pause in training AI systems more powerful than OpenAI's GPT-4, citing the potential risks of superintelligence.

Superintelligence is a hypothetical scenario in which AI surpasses human intelligence in all domains and becomes uncontrollable and unpredictable. As AI grows more advanced and ubiquitous, the debate over its regulation and impact will continue to shape the future of humanity and society. OpenAI's policy change reflects the complexity and diversity of the issues involved and the need for a collaborative, proactive approach to addressing them.