Thursday, December 26, 2024

OpenAI and Anthropic agree to let U.S. AI Safety Institute test models | Real Time Headlines

Jakub Porzycki | NurPhoto | Getty Images

As industry concerns over the safety and ethics of artificial intelligence grow, two of the most highly valued AI startups, OpenAI and Anthropic, have agreed to let the U.S. AI Safety Institute test their new models before releasing them to the public.

The institute, part of the Commerce Department's National Institute of Standards and Technology (NIST), said in a press release that it will get "access to major new models from each company prior to and following their public release."

The institute was established after the Biden-Harris administration issued the U.S. government's first-ever executive order on AI in October 2023, calling for new safety assessments, equity and civil rights guidance, and research on the impact of artificial intelligence on the labor market.

"We are happy to have reached an agreement with the U.S. AI Safety Institute for pre-release testing of our future models," OpenAI CEO Sam Altman wrote in a post. OpenAI also confirmed to CNBC on Thursday that the company's number of weekly active users has doubled over the past year to 200 million. Axios was first to report the figure.

The news comes a day after reports surfaced that OpenAI is in talks for a funding round that would value the company at more than $100 billion. Thrive Capital is leading the round and will invest $1 billion, according to a person familiar with the matter who asked not to be named because the details are confidential.

Anthropic, founded by former OpenAI research executives and employees, was most recently valued at $18.4 billion. Anthropic counts Amazon as a leading investor, while OpenAI is heavily backed by Microsoft.

The agreements between the government, OpenAI and Anthropic "will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks," according to Thursday's release.

"We strongly support the U.S. AI Safety Institute's mission and look forward to working together to inform safety best practices and standards for AI models," Jason Kwon, OpenAI's chief strategy officer, told CNBC in a statement.

Anthropic co-founder Jack Clark said the company's "collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment" and "strengthens our ability to identify and mitigate risks, advancing responsible AI development."

A number of AI developers and researchers have expressed concerns about safety and ethics in the increasingly for-profit AI industry. Current and former OpenAI employees published an open letter on June 4, describing potential problems with the rapid advancement of artificial intelligence and a lack of oversight and whistleblower protections.

"AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this," they wrote. They added that AI companies "currently have only weak obligations to share some of this information with governments, and none with civil society," and that they cannot be "relied upon to share it voluntarily."

Days after the letter was published, a source familiar with the matter confirmed to CNBC that the FTC and the Justice Department were set to open antitrust investigations into OpenAI, Microsoft and Nvidia. FTC Chair Lina Khan has described her agency's action as a "market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers."

On Wednesday, California lawmakers passed a hotly contested AI safety bill, sending it to the desk of Gov. Gavin Newsom. Newsom, a Democrat, has until Sept. 30 to decide whether to veto the legislation or sign it into law. The bill has been met with skepticism by some in the industry.

WATCH: Google, OpenAI and other companies oppose California's AI safety bill
