OpenAI’s Latest Leap in AI Safety Spurs Debate: A Step Forward or Just a Drop in the Ocean?
OpenAI has lately come under fire from critics who argue that the company is moving too fast and too recklessly in its push to develop ever more capable artificial intelligence. To show that it takes AI safety seriously, the company recently highlighted new research that, it says, makes it easier to scrutinize AI models and keep them beneficial and effective even as they grow more powerful.
The company presented the technique as one of several ideas intended to make AI safer. It involves setting two AI models against each other in a dialogue, forcing the more powerful one to explain its reasoning in terms that humans can follow.
"Creating an [artificial general intelligence] that is safe and advantageous is fundamental to our mission," says Yining Chen, a researcher at OpenAI who is part of the project, in a conversation with WIRED.
So far, the method has been tested on an AI model built to solve simple math problems. The OpenAI researchers asked the model to show its work as it answered questions, and they trained a second model to check whether those answers were correct. They found that the back-and-forth between the two models pushed the math-solving one to be more open and clear about how it reached its answers.
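To make the setup concrete, here is a minimal sketch of what such a solver-checker exchange could look like, written in Python against the OpenAI chat API. Everything in it is an illustrative assumption rather than OpenAI's actual experiment: the model names, the prompts, and the simple ACCEPT/REJECT grading are placeholders, and the research described here trains the two models against each other rather than merely chaining two API calls.

```python
# Illustrative sketch only: a "solver" model explains its reasoning on a
# math problem, and a second "checker" model judges the solution. Model
# names, prompts, and the ACCEPT/REJECT convention are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROBLEM = "A train travels 120 miles in 2 hours. What is its average speed?"

# Step 1: the solver answers and is told to show every step of its work.
solver_reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder; the research trains its own models
    messages=[
        {"role": "system",
         "content": "Solve the problem, showing each reasoning step clearly."},
        {"role": "user", "content": PROBLEM},
    ],
)
solution = solver_reply.choices[0].message.content

# Step 2: the checker verifies the solution and its legibility.
checker_reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder for a weaker checker model
    messages=[
        {"role": "system",
         "content": "You are a strict grader. Reply ACCEPT if the solution is "
                    "correct and clearly explained, otherwise reply REJECT."},
        {"role": "user",
         "content": f"Problem: {PROBLEM}\n\nProposed solution:\n{solution}"},
    ],
)
print(checker_reply.choices[0].message.content)  # ACCEPT or REJECT
```

As the article describes it, this kind of interaction is used during training to push the solver toward clearer explanations; the sketch only shows the division of labor between the two models.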
OpenAI has published a paper outlining the method. "This release aligns with our ongoing commitment to safety research," says Jan Hendrik Kirchner, one of the OpenAI researchers involved in the work. "Our aim is that this will encourage other researchers to build upon our work and perhaps explore different algorithms too."
Transparency and explainability are major concerns for AI researchers working to build more powerful systems. Large language models will sometimes offer plausible-sounding rationales for their conclusions, but a significant worry is that future models may grow more opaque, or even deceptive, in the justifications they give, perhaps pursuing an undesirable goal while lying about it.
The research unveiled today is part of a broader effort to understand how large language models, the technology at the core of services like ChatGPT, actually work. It is one of several techniques being pursued to make advanced AI models more transparent and therefore safer. OpenAI and other companies are also exploring more mechanistic ways of peering inside the inner workings of these models.
OpenAI has disclosed more of its AI safety work in recent weeks, in response to criticism of its approach. In May, WIRED learned that a team dedicated to studying long-term AI risk had been disbanded. That came shortly after the departure of cofounder and key technical leader Ilya Sutskever, who was among the board members that briefly ousted CEO Sam Altman last November.
OpenAI originally pledged to make artificial intelligence both more transparent and safer. But after ChatGPT's runaway success and mounting pressure from well-funded rivals, critics have accused the company of prioritizing flashy breakthroughs and market share over safety.
Daniel Kokotajlo, a former OpenAI researcher who signed an open letter criticizing the company's approach to AI safety, says the new work is significant but incremental, and that it does not change the fact that the companies building the technology still lack adequate oversight. "The current scenario remains the same," he says. "We have secretive, unregulated companies competing to create artificial superintelligence without a clear strategy for its management."
Another person familiar with OpenAI's inner workings, who asked not to be named because they were not authorized to speak publicly, says outside regulation of AI companies is also needed. The question, this person says, is whether companies will genuinely put in place processes and governance structures that prioritize societal benefit over profit, not merely whether they permit their researchers to work on safety.