Scale AI CEO Alexandr Wang testified before the House Armed Services Subcommittee on Cyber, Information Technologies and Innovation on Capitol Hill in Washington on July 18, 2023.
Jonathan Ernst | Reuters
Scale AI announced a landmark agreement with the Department of Defense on Wednesday, marking what could be a controversial turning point in the military’s use of artificial intelligence tools.
Scale AI, which provides training data to key AI players including OpenAI, Google, Microsoft and Meta, was awarded the prototype contract for “Thunderforge,” the Department of Defense’s “flagship program” for the use of AI agents in U.S. military planning and operations, according to the release.
A source familiar with the situation, who asked to remain anonymous due to the confidential nature of the contract, said it was a multimillion-dollar deal.
The program, led by the Defense Innovation Unit, will bring together a team of “global technology partners,” including Anduril and Microsoft, to develop and deploy AI agents. Uses will include modeling and simulation, decision-making support, proposed courses of action and even automated workflows. The program will launch with U.S. Indo-Pacific Command and U.S. European Command before expanding to other commands.
“Thunderforge marks a decisive shift toward AI-powered, data-driven warfare, ensuring that U.S. forces can anticipate and respond to threats with speed and precision,” according to DIU’s release.
“Our AI solutions will transform today’s military operations and modernize American defenses,” CEO Alexandr Wang said in a statement.
Both Scale and DIU highlighted speed and how AI will help military units make faster decisions. DIU mentioned the need for speed (or a synonym) eight times in its release.
DIU Director Doug Beck emphasized “machine speed” in his statement, while DIU Thunderforge program lead Bryce Goodman said there is currently “a fundamental mismatch between the speed of modern warfare and our ability to respond.”
Although Scale’s release mentions that the program will operate under human oversight, DIU’s release does not emphasize this.
AI-military partnerships
Scale’s announcement is part of a broader trend of AI companies not only walking back bans on military use of their products, but also entering into partnerships with defense industry giants and the Department of Defense.
In November, Anthropic, the Amazon-backed AI startup founded by former OpenAI research executives, and defense contractor Palantir announced a partnership with Amazon Web Services to give U.S. intelligence and defense agencies access to Anthropic’s family of AI models on AWS. This fall, Palantir signed a new five-year contract worth up to $100 million to expand U.S. military access to its Maven AI warfare program.
Last December, OpenAI and Anduril announced a partnership allowing the defense technology company to deploy advanced AI systems for “national security missions.”
According to a press release at the time, the OpenAI-Anduril partnership would focus on “improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats.”
Anduril, co-founded by Palmer Luckey, did not answer CNBC’s question at the time about whether reducing the onus on human operators would translate into fewer humans in the loop for high-stakes wartime decisions.
OpenAI told CNBC at the time that it stands by the policy in its mission statement prohibiting the use of its AI systems to harm others.
But according to some industry professionals, this is easier said than done.
“The problem is that you don’t have control over how the technology is actually used, if not in its current usage, then down the line once you’ve shared the technology,” Margaret Mitchell, researcher and chief ethics scientist at Hugging Face, told CNBC in an interview. “So I’m curious how the company actually operationalizes this: Is someone with security clearances actually checking the usage and verifying that it stays within the bounds of no direct harm?”
Mitchell said Hugging Face, an AI startup and OpenAI rival, has turned down military contracts in the past, including ones that involved no direct harm. She said the team understood how work that is one step removed from direct harm can be used for direct harm, adding that even a contract that seemed harmless was clearly part of a surveillance pipeline.
Scale AI CEO Alexandr Wang speaks on CNBC’s “Squawk Box” outside the World Economic Forum in Davos, Switzerland, on Jan. 23, 2025.
CNBC
Even summarizing social media posts can be one step removed from direct harm, Mitchell said, since those summaries could potentially be used to identify and take out enemy combatants.
“Is it really better if you’re one step removed from the harm and you’re helping to proliferate the harm?” Mitchell said. She called it an arbitrary line in the sand that works well for a company’s PR and for employee morale, adding that a company can tell the Department of Defense, “We’ll provide you with this technology; please don’t harm people with it,” but it cannot guarantee how the technology is used and will not have that purview.
Mitchell called it “a word game that provides some kind of acceptability … or a nonviolent veneer.”
Tech’s military pivot
In February, Google removed a pledge to refrain from using AI for potentially harmful applications, such as weapons and surveillance, according to the company’s updated “AI principles.” That is a change from the prior version, in which Google said it would not pursue “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” or “technologies that gather or use information for surveillance violating internationally accepted norms.”
In January 2024, Microsoft-backed OpenAI quietly removed its ban on the military use of ChatGPT and its other AI tools, just as it began working with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools.
Before that, OpenAI’s policy page specified that the company did not allow its models to be used for “activity that has high risk of physical harm,” such as weapons development or military and warfare. In the updated language, OpenAI removed the specific reference to the military, although its policy still states that users should not “use our service to harm yourself or others,” including to “develop or use weapons.”
News of the military partnerships and mission statement changes follows years of controversy over tech companies developing technology for military use, highlighted by the public concerns of tech workers, especially those working in AI.
Employees at virtually every tech giant involved in military contracts have voiced concerns, starting after thousands of Google employees protested the company’s involvement in the Pentagon’s Project Maven, which would have used Google AI to analyze drone surveillance footage.
Palantir later took over the contract.
Microsoft employees protested a $480 million Army contract to provide soldiers with augmented-reality headsets, and more than 1,500 Amazon and Google workers signed a letter protesting a joint contract with the Israeli government and military, under which the tech giants would provide cloud computing services, AI tools and data centers.
“There’s always a pendulum swing,” Mitchell said. “We’re in a swing right now where employees have less say within a technology company than they did a few years ago. It’s like a buyers’ and sellers’ market. … Right now, the interests of the company are weighted much more heavily than the interests of any individual employee.”