OpenAI Whistleblowers Ring Alarm Bells Over AI Risks and Workplace Retaliation
By Will Knight
A group of current and former OpenAI employees has published an open letter warning that the company and its rivals are building artificial intelligence recklessly, without adequate oversight, and while silencing workers who might witness irresponsible behavior.
The letter, posted at righttowarn.ai, says the dangers range from the further entrenchment of existing inequalities, to misinformation and manipulation, to the loss of control of autonomous AI systems that could ultimately threaten human extinction. So long as these companies face no effective government oversight, it argues, current and former employees are among the few people who can hold them accountable.
The letter calls on OpenAI and all AI companies to commit not to retaliate against employees who raise concerns about company practices. It also asks companies to establish "verifiable" ways for workers to give anonymous feedback on what their employers are doing. Ordinary whistleblower protections fall short, the letter argues, because they focus on illegal activity, while many of the risks at issue are not yet regulated. Given the history of similar cases across the industry, some of the signatories say they reasonably fear various forms of retaliation.
Last month, OpenAI drew criticism after a Vox report revealed that the company had threatened to claw back employees' vested equity if they refused to sign non-disparagement agreements, which barred them from criticizing the company or even acknowledging that the agreement existed. OpenAI CEO Sam Altman responded on X, saying he had been unaware of the arrangement, that the company had never actually clawed back anyone's equity, and that the clause would be removed, freeing employees to speak out.
Do you work at OpenAI now, or have you worked there in the past? We're interested in hearing your story. Please reach out to Will Knight from a personal phone or computer, either by email at will_knight@wired.com or through a secure message on Signal at wak.01.
OpenAI has also recently changed its approach to managing safety. Last month, a team responsible for assessing and countering the long-term risks posed by the company's more powerful AI models was effectively dissolved after several key figures departed and its remaining members were absorbed into other groups. A few weeks later, the company announced a new Safety and Security Committee, led by Altman and other board members.
Last November, OpenAI's board fired Altman, accusing him of withholding information and deliberately misleading its members. After a very public standoff, Altman returned to the company, and most of the board members who had ousted him were removed.
OpenAI spokesperson Liz Bourgeois said the company is proud of its track record of providing highly capable and safe AI systems and stands behind its scientific approach to addressing risk. Given the technology's significance, she added, OpenAI agrees that rigorous debate is crucial and will continue to engage with governments, civil society, and other communities around the world.
The letter's signatories include people who worked on safety and governance at OpenAI, current employees who signed anonymously, and researchers at rival AI companies. It has also been endorsed by several prominent AI researchers, including Geoffrey Hinton and Yoshua Bengio, who both won the Turing Award for pioneering work in AI, and Stuart Russell, a leading expert on AI safety.
Former OpenAI employees who signed the letter include William Saunders, Carroll Wainwright, and Daniel Ziegler, all of whom worked on AI safety at the company.
Jacob Hilton, a researcher who previously worked on reinforcement learning at OpenAI and left more than a year ago to pursue a new research opportunity, says the public does not fully appreciate how quickly the technology is advancing. Even though companies such as OpenAI have pledged to build AI safely, he notes, there is little oversight to ensure they follow through. The protections the group is asking for, he says, are meant to apply to all frontier AI companies, not just OpenAI.
Daniel Kokotajlo, who worked on AI governance at OpenAI, says he left because he had lost confidence that the company would behave responsibly. Certain incidents, which he declines to detail, should have been disclosed publicly, he says.
Kokotajlo argues that the measures proposed in the letter would bring greater transparency, and he is hopeful that OpenAI and other companies will reform their policies in response to the backlash over the non-disparagement agreements. He also worries about how quickly AI is advancing, and says he strongly believes the stakes will become far higher in the next few years.
Updated June 3, 2024, 5:50 pm ET: This story was updated to include comment from OpenAI.
© 2024 Condé Nast. All rights reserved.