OpenAI is disbanding its “AGI Readiness” team, which advised the company on its own ability to handle increasingly powerful artificial intelligence and the world’s readiness to govern the technology, its leader said.
On Wednesday, AGI Readiness senior advisor Miles Brundage announced the change via a Substack post. He wrote that his main reasons for leaving were that the opportunity cost had become too high, that he thought his research would be more impactful externally, that he wanted to be less biased and that he had accomplished what he set out to do at OpenAI.
Brundage also wrote that, as far as OpenAI and the world’s readiness for AGI is concerned, “Neither OpenAI nor any other cutting-edge lab is ready, and neither is the world.” Brundage plans to start his own nonprofit, or join an existing one, focused on AI policy research and advocacy. He added that AI cannot be “as safe and beneficial as it can be without a concerted effort.”
Former AGI readiness team members will be reassigned to other teams, the post said.
“We fully support Miles’ decision to pursue his policy research outside of industry and are deeply grateful for his contributions,” an OpenAI spokesperson told CNBC. “His plan to go all-in on independent AI policy research gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We’re confident that in his new role, Miles will continue raising the bar for the quality of policymaking in industry and government.”
In May, OpenAI decided to disband its Superalignment team, which focused on the long-term risks of artificial intelligence, just one year after it announced the group, a person familiar with the situation confirmed to CNBC at the time.
News of the AGI Readiness team’s disbandment comes amid possible plans by OpenAI’s board to restructure the firm into a for-profit business, and after three executives – chief technology officer Mira Murati, research chief Bob McGrew and research vice president Barret Zoph – announced their departures on the same day last month.
In early October, OpenAI closed a buzzy funding round at a valuation of $157 billion, including the $6.6 billion the company raised from an array of investment firms and big technology companies. It also received a $4 billion revolving line of credit, bringing its total liquidity to more than $10 billion. The company expects about $5 billion in losses on $3.7 billion in revenue this year, CNBC confirmed with a source familiar with the matter last month.
And in September, OpenAI announced that the safety and security committee it established in May, as it dealt with controversy over its safety processes, would become an independent board oversight committee. The group recently wrapped up its 90-day review evaluating OpenAI’s processes and safeguards and then made recommendations to the board, with the findings also released in a public blog post.
News of the executive departures and board changes comes amid mounting safety concerns and controversies surrounding OpenAI, which along with Google, Microsoft, Meta and other companies is at the helm of a generative artificial intelligence arms race – a market predicted to top $1 trillion in revenue within a decade – as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.
In July, OpenAI reassigned Aleksander Madry, one of its top safety executives, to a job focused on AI reasoning instead, people familiar with the situation confirmed to CNBC at the time.
According to Madry’s profile on a Princeton University AI initiative website, Madry was the head of OpenAI’s preparedness team, which was “tasked with tracking, evaluating, forecasting, and helping protect against catastrophic risks related to frontier AI models.” OpenAI told CNBC at the time that Madry would still work on core AI safety work in his new role.
The decision to reassign Madry came at the same time that Democratic senators sent a letter to OpenAI CEO Sam Altman concerning “questions about how OpenAI is addressing emerging safety concerns.”
The letter, which was viewed by CNBC, also stated, “We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats.”
Microsoft gave up its observer seat on OpenAI’s board in July, saying in a letter viewed by CNBC that it could now step aside because it was satisfied with the construction of the startup’s board, which had been revamped since the uprising that led to the brief ouster of Altman and threatened Microsoft’s massive investment in the company.
But in June, a group of current and former OpenAI employees published an open letter describing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.
“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote at the time.
Days after the letter was published, a source familiar with the matter confirmed to CNBC that the FTC and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies’ conduct.
FTC Chair Lina Khan has described her agency’s action as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”
The current and former employees wrote in the June letter that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they have put in place and the risk levels the technology poses for different types of harm.
“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
OpenAI’s Superalignment team, announced last year and disbanded in May, focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.
The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup in May. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
Altman wrote on X at the time that he was sad to see Leike leave and that OpenAI had more work to do. Soon afterward, co-founder Greg Brockman posted a statement attributed to Brockman and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”
“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X at the time. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote at the time. “Over the past few months my team has been sailing against the wind. At times we were struggling for [computing resources], and it was getting harder and harder to get this crucial research done.”
Leike added that OpenAI must become a “safety-first AGI company.”
“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote on X, adding that at OpenAI, safety had given way to shiny products.