
Chinese regulators begin testing GenAI models for socialist values

Digital code and Chinese flag representing China’s cyber security.

Anton Petrus | Moment | Getty Images

China’s artificial intelligence companies are undergoing a government review of their large language models, aimed at ensuring they “embody core socialist values,” according to a report by the Financial Times.

The review, conducted by the Cyberspace Administration of China (CAC), the Chinese government’s main internet regulator, covers players across the sector, ranging from large companies such as ByteDance and Alibaba to small startups.

According to the Financial Times, local CAC officials will test the AI models’ answers to a variety of questions, many of them related to politically sensitive topics and Chinese President Xi Jinping. The models’ training data and safety processes will also be reviewed.

An anonymous source at an artificial intelligence company in Hangzhou told the Financial Times that its model failed the first round of testing for unclear reasons. It took months of “guessing and tweaking” before the model passed a second time, the source said.

The CAC’s latest move shows that Beijing is walking a tightrope between two goals: catching up with the United States on generative AI, while closely monitoring the technology’s development to ensure that AI-generated content complies with the country’s strict internet censorship policies.


The country was one of the first to finalize rules governing generative artificial intelligence last year, including a requirement that AI services adhere to “core socialist values” and avoid generating “illegal” content.

Multiple engineers and industry insiders told the Financial Times that complying with the censorship policies requires “security filtering,” a process complicated by the fact that Chinese LLMs are still trained on a large amount of English-language content.

According to the report, the filtering is done by removing “questionable information” from the AI models’ training data and then building a database of sensitive words and phrases.
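As a rough illustration of how such keyword-based filtering of training data might work in practice, consider the minimal sketch below. The phrase list, variable names, and sample documents are hypothetical placeholders, not details from the FT report.

    # Minimal sketch of keyword-based training-data filtering.
    # The phrase list and documents are illustrative placeholders.

    SENSITIVE_PHRASES = {"example banned phrase", "another banned term"}

    def is_clean(document: str) -> bool:
        """Return True if the document contains none of the flagged phrases."""
        text = document.lower()
        return not any(phrase in text for phrase in SENSITIVE_PHRASES)

    def filter_corpus(documents: list[str]) -> list[str]:
        """Drop 'questionable' documents before they reach model training."""
        return [doc for doc in documents if is_clean(doc)]

    corpus = ["a harmless training document", "text containing another banned term"]
    print(filter_corpus(corpus))  # -> ['a harmless training document']

In a real pipeline the matching would likely be far more sophisticated, but the basic shape (a curated blocklist applied to the corpus before training) is what the report describes.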

The rules have reportedly led the country’s most popular chatbots to routinely decline to answer questions about sensitive topics, such as the 1989 Tiananmen Square protests.

However, during CAC testing there is a limit on the number of questions an LLM can decline outright, so a model must also be able to produce “politically correct answers” to sensitive questions.

An artificial intelligence expert working on a chatbot in China told the Financial Times that it is difficult to stop LLMs from generating all potentially harmful content, so developers build an additional layer on top of the system that replaces questionable answers on the fly.
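A guardrail layer of the kind described might look roughly like the following sketch: a wrapper that screens each generated answer and substitutes a stock response when flagged terms appear. The term list, the canned reply, and the stand-in generate function are all hypothetical, assumed purely for illustration.

    # Sketch of an output-substitution layer wrapped around a base model.
    # FLAGGED_TERMS, CANNED_REPLY, and the generate callable are hypothetical.

    FLAGGED_TERMS = {"flagged topic"}
    CANNED_REPLY = "Let's talk about something else."

    def guarded_answer(generate, prompt: str) -> str:
        """Generate an answer, then swap it out on the fly if it trips the filter."""
        answer = generate(prompt)
        if any(term in answer.lower() for term in FLAGGED_TERMS):
            return CANNED_REPLY  # replace the questionable answer mid-stream
        return answer

    # Example with a stand-in "model" that returns a problematic reply:
    demo_model = lambda prompt: "a reply mentioning flagged topic"
    print(guarded_answer(demo_model, "tell me about it"))
    # -> Let's talk about something else.

Because the check runs on the model’s output rather than its training data, this kind of layer can catch content the upstream filtering missed, at the cost of occasionally replacing benign answers.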

Regulation, together with U.S. sanctions restricting access to the chips used to train LLMs, has made it difficult for Chinese companies to launch their own ChatGPT-like services. Even so, China dominates the global race for generative artificial intelligence patents.

Read the full FT report
