OpenAI CEO Sam Altman during a fireside chat organized by SoftBank Asia Ventures in Seoul, South Korea, on Friday, June 9, 2023.
Sung-jun Cho | Bloomberg | Getty Images
OpenAI and Anduril announced a partnership on Wednesday that will allow the defense technology company to deploy advanced artificial intelligence systems for “national security missions.”
This is part of a broader and controversial trend of artificial intelligence companies not only lifting bans on military uses of their products, but also entering into partnerships with defense industry giants and the U.S. Department of Defense.
Last month, Anthropic, the artificial intelligence startup founded by former OpenAI research executives, and defense contractor Palantir announced a partnership with Amazon Web Services to “provide U.S. intelligence and defense agencies access to (Anthropic’s) Claude 3 and 3.5 family of models on AWS.” This fall, Palantir signed a new five-year contract worth up to $100 million to expand U.S. military access to its Maven artificial intelligence warfare program.
The OpenAI-Anduril partnership announced Wednesday will “focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real time,” according to a press release, which added that “Anduril and OpenAI will explore how leading artificial intelligence models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness.”
Anduril, co-founded by Palmer Luckey, did not answer questions about whether reducing the burden on human operators would mean fewer people involved in high-stakes decisions of warfare. Luckey founded Oculus VR, which he sold to Facebook in 2014.
OpenAI said it is working with Anduril to help human operators make decisions and “protect U.S. military personnel on the ground from unmanned drone attacks.” The company said it stands by the policy in its mission statement prohibiting the use of its artificial intelligence systems to harm others.
The news comes after Microsoft-backed OpenAI in January quietly removed its ban on the military use of ChatGPT and its other artificial intelligence tools, just as it began working with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools.
Until early January, OpenAI’s policies page specified that the company did not allow its models to be used for “activity that has high risk of physical harm,” such as weapons development or military and warfare. In mid-January, OpenAI removed the specific mention of the military, although its policy still states that users should not “use our service to harm yourself or others,” including to “develop or use weapons.”
The news follows years of controversy over technology companies developing military technology, highlighted by public concerns among tech workers, particularly those working in artificial intelligence.
Since thousands of Google employees protested the Pentagon’s Project Maven, workers at nearly every tech giant involved in military contracts have voiced concerns.
Microsoft employees protested a $480 million Army contract to supply soldiers with augmented reality headsets, and more than 1,500 Amazon and Google workers signed a letter protesting a joint $1.2 billion, multiyear contract with the Israeli government and military, under which the tech giants would provide cloud computing services, artificial intelligence tools and data centers.
—CNBC’s Morgan Brennan contributed to this report.