Global Protests Demand Pause on AI Development Amid Calls for Safer Regulations
Matthew Reynolds
Demonstrators Rally Against AI, Divided Over Tactics
On Monday, around 20 demonstrators gathered on a side street next to the main office of the Department for Science, Innovation and Technology in central London, running through their protest chants.
The crowd chants: "What do we want? Safe AI! And when do we want it?" There's a moment of uncertainty. "Eventually?" one voice ventures tentatively.
The cluster of mostly young men huddles briefly before launching into a fresh slogan: "What do we want? Pause AI! And when do we want it? Now!"
The demonstrators belong to PauseAI, an activist group calling for a halt to the development of the most powerful AI systems, which they believe could threaten humanity's future.
Demonstrations associated with PauseAI are taking place worldwide, including in San Francisco, New York, Berlin, Rome, Ottawa, and several other cities.
Their aim is to capture the attention of voters and policymakers ahead of the AI Seoul Summit, a follow-up to the AI Safety Summit held in the UK in November 2023. But the loose-knit group of protesters is still working out the most effective way to get its message across.
That summit produced little in the way of meaningful regulation, according to Joep Meindertsma, the founder of PauseAI. Attendees agreed to the "Bletchley Declaration" at the summit, but Meindertsma believes the accord carries little weight. "It's merely an initial move, whereas our real need is for enforceable global agreements," he says.
The group is calling for a moratorium on the development of AI systems more powerful than GPT-4, and it wants that moratorium adopted globally, with a pointed reference to the United States, home to many of the world's top AI labs. It is also advocating for all United Nations member states to sign a treaty establishing a global AI oversight body that would authorize newly released AI systems and monitor large-scale model training runs. The demonstrations coincided with the day OpenAI unveiled an upgraded version of ChatGPT, designed to make the bot's interactions more humanlike.
"History shows that global bans on technology are possible," Meindertsma notes, pointing to the Montreal Protocol, the 1987 international pact that phased out CFCs and similar substances that were damaging the ozone layer. "There are agreements in place that prohibit the use of blinding laser weapons. I'm quite hopeful that we can find a way to pause."
At the London demonstration, participant Oliver Chamberlain expressed skepticism about corporations agreeing to halt their artificial intelligence research. Despite his doubts, the gravity of the situation drove him to join the protest. Chamberlain believes that only significant legislative control over AI could improve his outlook on the matter.
The strategy for achieving PauseAI's objectives is also under debate. On the organization's Discord server, some members have proposed peaceful protests at the offices of AI developers. OpenAI, in particular, has been a frequent target: in February, activists associated with PauseAI gathered outside the company's San Francisco offices after it amended its usage policies to remove a ban on military and warfare applications of its technology.
One Discord member asked whether it would be too disruptive for protesters to stage sit-ins or chain themselves to the doors of AI developers' buildings. "Probably not. In the end, we do what is necessary for a future with humanity, while we still can," came the reply.
Meindertsma's concerns regarding AI were sparked by "Superintelligence," a book penned in 2014 by philosopher Nick Bostrom, which introduced the notion that highly sophisticated AI technologies could threaten the very survival of humanity. Inspired by the same book, Joseph Miller took the lead in organizing a demonstration for PauseAI in London.
The release of OpenAI's large language model GPT-3 in 2020 crystallized Miller's concerns about the direction artificial intelligence was heading. "I quickly came to understand that the issues we face with AI aren't far-off concerns; they're immediate, given how sophisticated AI has become," he remarks. Miller went on to join a nonprofit dedicated to AI safety research and subsequently took a role with PauseAI.
The thoughts and concepts presented by Bostrom have significantly impacted the "effective altruism" movement, which encompasses a wide range of individuals committed to long-termism. This philosophy emphasizes the importance of shaping the distant future as a current ethical obligation. While numerous founders of PauseAI have their origins in the effective altruism community, they are eager to extend their appeal beyond theoretical discussions to attract broader backing for their initiative.
Holly Elmore, who leads PauseAI US, is keen for the movement to encompass a wide array of people, including creatives, authors, and intellectual property holders whose livelihoods are threatened by AI systems capable of replicating artistic output. "My approach is practical; I focus on the outcomes. But what really propels me into action is the unfairness stemming from the absence of consent from the firms developing AI technologies," she says.
"There's no need to prioritize one AI risk over another if a pause is the remedy," she adds. "A pause is the only approach that addresses every concern."
Miller reiterated this sentiment, mentioning his conversations with artists who have seen their income affected by the rise of AI art creation tools. "These issues exist now and indicate potentially more severe consequences in the future."
Gideon Futerman, a participant in the London demonstration, is busy handing flyers to government employees leaving the building across the street. He joined the movement last year and says, "The idea that a pause is actually achievable has really begun to take hold since then."
Futerman holds a hopeful view on the power of protest movements to redirect the development of emerging technologies. He highlights how resistance to genetically modified organisms played a key role in Europe's rejection of the technology during the 1990s. Similarly, nuclear power faced considerable public opposition. Futerman notes that while these movements might not always have been correct in their assertions, they demonstrate that widespread protest can halt the progress of technologies, even those that offer solutions for reducing carbon emissions or increasing agricultural yield.
In central London, the group of demonstrators crosses the road to distribute pamphlets to the civil servants streaming out of the government offices. Most look pointedly uninterested, though a few accept a pamphlet. That morning, Rishi Sunak, the UK prime minister who had convened the inaugural AI Safety Summit six months earlier, delivered a speech acknowledging concerns about artificial intelligence. After that brief acknowledgment, however, he shifted his emphasis to the benefits AI could bring.
PauseAI's leaders, in conversations with WIRED, said they are not currently contemplating more aggressive forms of protest such as sit-ins or encampments near AI facilities. "Our tactics and our methods are actually very moderate," Elmore said. "I want PauseAI to serve as a moderate foundation for many groups in this space. We would certainly never condone violence. Beyond that, I want PauseAI to be seen as highly trustworthy."
Meindertsma concurs, expressing that further radical measures are not warranted currently. "I genuinely wish that additional steps won't be necessary. I anticipate that they won't be. I don't see myself as someone who would spearhead a campaign that crosses legal boundaries."
PauseAI's founder hopes his movement can shed the label of AI doom-mongering. "A pessimist is someone who gives up on humanity," he says. "I'm an optimist; I'm convinced we can make a difference here."