OpenAI CEO Sam Altman speaks at the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.
Jason Redmond | AFP | Getty Images
OpenAI said on Monday that the safety and security committee it established in May to address contentious safety issues will become an independent board oversight committee.
The group will be chaired by Zico Kolter, director of the Machine Learning Department at Carnegie Mellon University's School of Computer Science. Other members include Adam D'Angelo, an OpenAI board member and co-founder of Quora; Paul Nakasone, a former National Security Agency director and OpenAI board member; and Nicole Seligman, a former executive vice president at Sony.
The company said the committee will oversee "the security processes that guide the deployment and development of OpenAI models." The group recently concluded a 90-day review of OpenAI's processes and safeguards and made recommendations to the board. OpenAI is releasing the group's findings in a public blog post.
OpenAI, the Microsoft-backed startup behind ChatGPT and SearchGPT, is currently pursuing a funding round that would value the company at more than $150 billion, according to people familiar with the matter. Thrive Capital is leading the round and plans to invest $1 billion, and Tiger Global is also planning to participate. Microsoft, Nvidia, and Apple are reportedly in talks to invest as well.
The committee's five key recommendations include establishing independent governance for safety and security, enhancing security measures, being transparent about OpenAI's work, collaborating with external organizations, and unifying the company's safety frameworks.
Last week, OpenAI released o1, a preview version of its new artificial intelligence model focused on reasoning and "solving hard problems." The company said the committee "reviewed the safety and security criteria that OpenAI used to assess" o1's fitness for release, as well as the results of its safety evaluations.
The committee will “work with the full board to provide oversight of the release of the model, including the authority to delay the release until safety concerns are resolved.”
Although OpenAI has been in hypergrowth mode since launching ChatGPT in late 2022, the company has also been dogged by controversy and high-profile departures, with some current and former employees concerned that it is growing too quickly to operate safely.
In July, Democratic senators sent a letter to OpenAI CEO Sam Altman with questions about how the company is addressing emerging safety concerns. Last month, a group of current and former OpenAI employees published an open letter describing concerns about a lack of oversight and an absence of whistleblower protections for those who wish to speak up.
In May, a former OpenAI board member, speaking about Altman's temporary ouster in November, said the board had repeatedly been given "inaccurate information about the small number of formal safety processes that the company did have in place."
That same month, OpenAI disbanded its team focused on the long-term risks of artificial intelligence, just a year after it announced the group. Team leaders Ilya Sutskever and Jan Leike announced their departures from OpenAI in May. Leike wrote in a post on X that OpenAI's "safety culture and processes have taken a backseat to shiny products."
WATCH: OpenAI is the undisputed leader of the AI supercycle