Top Chinese research institutions with ties to the People’s Liberation Army have used Meta’s public Llama model to develop an artificial intelligence tool for potential military applications, according to academic papers and analysts.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the Academy of Military Sciences (AMS), the main research arm of the People’s Liberation Army (PLA), detailed how they used an early version of Meta’s Llama as the basis for what they call “ChatBIT”.
The researchers used the Llama 2 13B large language model (LLM) released by Meta META.O in February 2023, incorporating their own parameters to build a military-focused AI tool that gathers and processes intelligence and provides accurate and reliable information for operational decision-making.
ChatBIT was fine-tuned and “optimized for conversational and question-and-answer tasks in the military domain,” the paper said. It was found to outperform some other AI models that were roughly 90% as capable as OpenAI’s powerful ChatGPT-4. The researchers did not elaborate on how they defined performance or specify whether the model had been put into use.
“This is the first time there is substantial evidence that military experts affiliated with China’s People’s Liberation Army have been systematically researching and trying to harness the power of open-source LLMs, particularly Meta’s, for military purposes,” said Sunny Cheung, an associate fellow at the Jamestown Foundation.
Meta has embraced the open release of many of its artificial intelligence models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company.
Its terms also prohibit use of the models for “military, warfare, nuclear industries or applications, espionage” and other activities subject to U.S. defense export controls, as well as for developing weapons or content intended to “incite and promote violence.”
However, because Meta’s models are publicly available, the company has limited means of enforcing these provisions.
In response to questions from Reuters, Meta cited its acceptable use policy and said it has measures in place to prevent abuse.
“Any use of our model by the People’s Liberation Army is unauthorized and violates our acceptable use policy,” Meta public policy director Molly Montgomery told Reuters in a phone interview.
The Chinese researchers include Geng Guotong and Li Weiwei of the AMS’s Military Science Information Research Center and the National Defense Science and Technology Innovation Institute, as well as researchers from the Beijing Institute of Technology and Minzu University of China.
The paper states: “In the future, through technological improvement, ChatBIT will not only be used in intelligence analysis, but also explore… strategic planning, simulation training and command decision-making.”
China’s Ministry of National Defense did not respond to a request for comment, nor did any of the institutions or researchers.
Reuters was unable to confirm ChatBIT’s capabilities or computing power, though the researchers noted that its model incorporated only 100,000 military dialogue records, a relatively small number compared with other LLMs.
“That is just a drop in the ocean compared to most models, which are trained with trillions of tokens, so … it does make me question what they actually achieve in terms of different capabilities,” said a Meta AI researcher who is also a professor of computer science at McGill University in Canada.
The research comes as debate rages in U.S. national security and technology circles about whether companies like Meta should make their models public.
U.S. President Joe Biden signed an executive order in October 2023 seeking to regulate the development of artificial intelligence, noting that while innovation can bring substantial benefits, there are also significant safety risks, such as removing safeguards within models.
This week, Washington said it was finalizing rules to limit U.S. investment in Chinese artificial intelligence and other technologies that could threaten national security.
Pentagon spokesman John Supple said the Department of Defense recognizes that open-source models have both advantages and drawbacks, and that “we will continue to closely monitor and evaluate the capabilities of our competitors.”
Some observers say China’s progress in developing indigenous artificial intelligence, including the establishment of dozens of research laboratories, has made it difficult to prevent China from closing the technological gap with the United States.
In another academic paper reviewed by Reuters, two researchers at the Aviation Industry Corporation of China (AVIC) – a state-owned company the U.S. has designated as having ties to the People’s Liberation Army – described using Llama 2 for “airborne electronic warfare jamming strategy training.”
China’s use of Western-developed artificial intelligence also extends to domestic security. A June paper described how Llama could be used in “intelligence policing” to process large amounts of data and enhance police decision-making.
In April, the state-owned People’s Liberation Army Daily published a commentary describing how artificial intelligence could help “accelerate the research and development of weapons and equipment, assist in the development of combat simulations, and improve the efficiency of military training.”
“Can you keep them (China) out of the cookie jar? No, I don’t see how you can,” William Hannas, principal analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), told Reuters. A 2023 CSET paper found 370 Chinese institutions whose researchers had published papers related to general artificial intelligence – helping drive China’s national strategy to lead the world in artificial intelligence by 2030.
“There is so much collaboration going on between China’s best scientists and America’s best AI scientists that they cannot be excluded from developments,” Hannas added.