OpenAI's AI Safety Team Reportedly Disbanded: Adjustment or Liquidation?
On May 21, media reported that the team within OpenAI dedicated to AI safety research has been dissolved, with its remaining members absorbed into other research groups within the company, marking yet another significant personnel change at OpenAI in a short span of time.
On May 14, OpenAI co-founder and Chief Scientist Ilya Sutskever announced his departure, and a few hours later Jan Leike, co-lead of the Superalignment team, announced his resignation as well. After the departure of these two key members of the safety team, the remaining members are said to have lost confidence in the leadership of CEO Sam Altman.
Leike said on departing that his main reason for leaving was disagreement with OpenAI's priorities: in his view, the company now puts too much emphasis on product development and too little on AI safety. On social media, he wrote that OpenAI's leadership had chosen the wrong core priorities and that, as the company works toward artificial general intelligence (AGI), it should put safety and preparedness first rather than racing the technology forward.
OpenAI's AI safety team was established in July 2023 to prepare for the emergence of advanced AI smarter and more powerful than its creators. Sutskever, a veteran of OpenAI, was appointed to lead the team. At its founding, OpenAI backed the team strongly, at one point committing nearly 20% of the company's computing resources to it.
However, after Sutskever and Leike departed in quick succession, OpenAI chose to disband the AI safety team and fold its functions into other research efforts within the organization.
The dissolution of the AI safety team is said to be an aftershock of the boardroom storm surrounding Sam Altman last November. In November 2023, Altman was fired by the OpenAI board of directors; according to media reports, the trigger was a letter from several researchers warning that a powerful AI discovery could pose a threat to humanity. But faced with the threatened resignation of more than 700 employees and pressure from Microsoft and other parties, Altman quickly returned to OpenAI as CEO, and a new board was formed.
According to sources, OpenAI's Chief Technology Officer Mira Murati told employees that a letter about a breakthrough called Q* (pronounced Q-Star) prompted the board to act. OpenAI's progress on Q* led some insiders to believe it could be the breakthrough in the company's pursuit of superintelligence (i.e., AGI), which OpenAI defines as "an AI system that is smarter than humans."
From what has since come to light, Sutskever appears to have been the driving force behind Altman's brief ouster, and the dissolution of the AI safety team does indeed look a lot like a liquidation.
Disclaimer: The views in this article are those of the original creator and do not represent the views or position of Hawk Insight. The content is for reference, communication, and learning purposes only and does not constitute investment advice. If copyright issues are involved, please contact us for deletion.