OpenAI last week removed Aleksander Madry, one of its top safety executives, from his role and reassigned him to a job focused on AI reasoning, people familiar with the matter confirmed to CNBC.
Madry is the head of OpenAI's preparedness team, which is "responsible for tracking, assessing, predicting and helping to prevent catastrophic risks associated with cutting-edge artificial intelligence models," according to a bio of Madry on the Princeton University AI Initiative website.
OpenAI told CNBC that Madry will still work on core AI safety issues in his new role.
He is also director of MIT's Center for Deployable Machine Learning and faculty co-lead of the MIT Artificial Intelligence Policy Forum, roles from which he is currently on leave, according to the university's website.
Less than a week before the decision to reassign Madry, a group of Democratic senators sent a letter to OpenAI CEO Sam Altman with questions about how the company is addressing emerging safety concerns.
The letter, sent on Monday and viewed by CNBC, also stated: "We seek additional information from OpenAI about the steps the company is taking to meet its public safety commitments, how the company internally evaluates its progress on those commitments, and the company's identification and mitigation of cybersecurity threats."
OpenAI did not immediately respond to a request for comment.
Lawmakers have given the tech startup until Aug. 13 to provide a series of answers to specific questions about its safety practices and financial commitments.
It's part of a summer of growing safety concerns and controversy surrounding OpenAI, which, along with Google, Microsoft, Meta and other companies, is at the helm of a generative artificial intelligence arms race. The market is predicted to reach $1 trillion in revenue within a decade, and companies in seemingly every industry are rushing to add AI-powered chatbots and agents to avoid being left behind by competitors.
Earlier this month, Microsoft gave up its observer seat on OpenAI's board of directors, stating in a letter seen by CNBC that it could now step aside because it was satisfied with the structure of the startup's board, which has been overhauled since the upheaval that led to the brief ouster of Altman and threatened Microsoft's huge investment in the company.
But last month, a group of current and former OpenAI employees published an open letter describing concerns that the AI industry has grown rapidly despite a lack of oversight and an absence of protections for whistleblowers willing to speak out.
“AI companies have strong financial incentives to avoid effective oversight, and we believe customized corporate governance structures are insufficient to change this,” employees wrote at the time.
Days after the letter was published, a source familiar with the matter confirmed to CNBC that the Federal Trade Commission and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies' conduct.
FTC Chair Lina Khan has described her agency's action as a "market inquiry into the investments and partnerships being formed between artificial intelligence developers and major cloud service providers."
Current and former employees wrote in the June letter that the AI companies have "a wealth of non-public information" about what their technology can do, the extent of the safety measures they have put in place and the risk levels the technology poses for different types of harm.
"We also understand the serious risks posed by these technologies," they wrote, adding that the companies "currently have only a tenuous obligation to share some of this information with governments, and none with civil society. We do not believe they can all be relied upon to share it voluntarily."
In May, OpenAI decided to disband its team focused on the long-term risks of artificial intelligence just one year after announcing the group, a person familiar with the matter confirmed to CNBC at the time.
The person, who spoke on condition of anonymity, said some team members were being reassigned to other teams within the company.
The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup in May. Leike wrote in a post on X that OpenAI's "safety culture and processes have taken a back seat to shiny products."
Altman said on X at the time that he was sad to see Leike leave and that OpenAI had more work to do. Soon afterward, co-founder Greg Brockman posted a statement on X attributed to Brockman and the CEO, asserting that the company has "raised awareness of the risks and opportunities of AGI so that the world can better prepare for it."
"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X at the time. "However, I had been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."
Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
"These problems are quite hard to get right, and I worry we aren't on a trajectory to get there," he wrote. "Over the past few months my team has been sailing against the wind. At times we were struggling for [computing resources], and it was getting harder and harder to get this crucial research done."
Leike added that OpenAI must become a “safety-first AGI company.”
“Building machines that are smarter than humans is inherently dangerous work,” he wrote at the time. “OpenAI has a huge responsibility on behalf of all of humanity. But over the past few years, safety culture and processes have taken a back seat to shiny products.”
The Information first reported on Madry's reassignment.