Will Knight
OpenAI Dissolves Its Advanced AI Safety Group
Last July, OpenAI announced the formation of a dedicated research team to prepare for the emergence of highly advanced AI that could outsmart and overpower its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, was named co-lead of the group, and OpenAI said it would commit a fifth of its computing resources to the effort.
OpenAI has now confirmed that this "superalignment team" has been dissolved, following the departures of several researchers on it, the announcement that Sutskever was leaving the company, and the resignation of the team's other co-lead. The group's work will be absorbed into OpenAI's broader research efforts.
Sutskever's exit drew significant attention: not only did he co-found OpenAI with Sam Altman in 2015 and help shape the research that led to ChatGPT, he was also one of the four board members who ousted Altman last November. Altman was restored as CEO five chaotic days later, after a mass revolt by OpenAI employees and the brokering of a deal under which Sutskever and two other directors left the board. Hours after Sutskever's departure was announced on Tuesday, Jan Leike, a former DeepMind researcher who co-led the superalignment team with him, posted on X that he had resigned as well.
Neither Sutskever nor Leike responded to requests for comment. Sutskever did not offer an explanation for his decision, but in a post on X he voiced support for OpenAI's current direction, writing that the company's progress had been remarkable and that he was confident OpenAI would build AGI that is both safe and beneficial under its present leadership.
On Friday, Leike posted a thread on X explaining that his departure stemmed from a dispute over the company's priorities and the resources allotted to his team.
Leike wrote that he had disagreed with OpenAI's leadership about the company's core priorities for some time, until the disagreement finally reached a breaking point. He added that in recent months his team had struggled, at times lacking the computing resources it needed, which made it increasingly difficult to carry out crucial research.
The dissolution of the superalignment team adds to growing evidence of a shakeout inside OpenAI in the wake of last November's governance crisis. Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets, The Information reported last month. Another member of the team, William Saunders, left OpenAI in February, according to a forum post attributed to him.
Two more researchers working on OpenAI's AI policy and governance efforts also appear to have left the company recently. According to his LinkedIn profile, Cullen O'Keefe stepped down as research lead for policy frontiers in April. And a forum post attributed to Daniel Kokotajlo, an OpenAI researcher who has co-authored several papers on the risks of advanced AI models, says he quit after losing confidence that the company would behave responsibly as it develops artificial general intelligence, or AGI. Attempts to reach the researchers believed to have departed were unsuccessful.
OpenAI declined to comment on the departures of Sutskever and the other members of the superalignment team, or on the future of its work on long-term AI risks. Research on the dangers posed by more capable models will now be led by John Schulman, who co-leads the team responsible for fine-tuning AI models after training.
The superalignment team was not the only group at OpenAI wrestling with how to keep AI under control, though it was publicly positioned as the primary one working on the most distant version of the problem. The blog post announcing the team last summer acknowledged that OpenAI had no solution for steering or controlling a potentially superintelligent AI, or for preventing it from going rogue. OpenAI's charter commits it to safely developing so-called artificial general intelligence, technology that rivals or exceeds human capabilities, for the benefit of humanity. Sutskever and other leaders at the company have often spoken about the need to proceed with caution. At the same time, OpenAI has been at the forefront of developing and publicly releasing experimental AI projects.
OpenAI was once unusual among prominent AI labs for the eagerness of leaders like Sutskever to discuss building AI that surpasses human intelligence, and the possibility that such technology could turn on humanity. That kind of talk became far more widespread last year, after ChatGPT thrust OpenAI into the center of global technology attention. As researchers and policymakers grappled with ChatGPT and the prospect of vastly more capable AI systems, the idea that AI could harm individuals, or humanity as a whole, became a more mainstream concern.
The existential angst has since cooled, and AI has yet to make another giant leap, but the debate over AI governance remains heated. And this week, OpenAI showcased a new version of ChatGPT that could change people's relationship with the technology in powerful, and perhaps problematic, new ways.
The departures of Sutskever and Leike come shortly after OpenAI's latest major reveal: a new "multimodal" AI model called GPT-4o, which allows ChatGPT to see the world and converse in a more natural and humanlike way. In a livestreamed demonstration, the new version of ChatGPT mimicked human emotions and even attempted to flirt with users. OpenAI has said it will make the new interface available to paid subscribers within a few weeks.
There is no indication that the recent departures are connected to OpenAI's push toward more humanlike AI or to its product releases. But the latest advances do raise questions about privacy, emotional manipulation, and cybersecurity risks. OpenAI maintains a separate research group, the Preparedness team, that focuses on these issues.
Updated 5/17/2024, 12:23 pm ET: This article was updated to include comments posted to X by Jan Leike.
© 2024 Condé Nast. All rights reserved.