From AI’s Inception to Its Ethical Reformation: The Journey of a Google Engineer Towards Open Source AI
Steven Levy
The Architect Behind Generative AI Aims to Preserve It
Back in 2016, Illia Polosukhin, then a Google engineer, found himself venting over lunch to a colleague, Jakob Uszkoreit, about the stagnation of a project that aimed to use AI to generate meaningful responses to user queries. Uszkoreit suggested a concept he had been mulling over, dubbed self-attention. That conversation grew into an eight-person collaboration, culminating in a groundbreaking 2017 paper titled “Attention Is All You Need.” The paper unveiled the transformer architecture, revolutionizing the field of artificial intelligence and its applications worldwide.
Eight years on, Polosukhin is dissatisfied with the current state of affairs. A longtime advocate of open source, he is uneasy about the opacity surrounding the development of transformer-based large language models, even at companies that claim to champion transparency. (Could he be hinting at a particular company?) The training data and model weights remain undisclosed, leaving outsiders no way to modify or experiment with these models. Meta claims its systems are open source, but Polosukhin disputes the authenticity of that openness. He argues, "Although the parameters are accessible, the lack of transparency about the training data conceals potential biases and influences the models' decision-making processes."
As LLM technology advances, he worries that it will become increasingly perilous, particularly as the pursuit of financial gain shapes its development. "Businesses argue that they require additional funds to develop more sophisticated models. However, these enhanced models will be more adept at influencing individuals, and they can be more effectively optimized for revenue generation," he notes.
Polosukhin holds no faith in the effectiveness of regulation. He argues that setting boundaries for the models is such a complex task that regulators would inevitably have to depend on the very companies they're supposed to be regulating. He points out that there are scarcely any individuals capable of making informed judgments on model parameters or determining what constitutes an acceptable safety margin. “It’s challenging even for engineers to tackle these issues about model parameters and safety thresholds,” he mentions. He firmly believes that it is unlikely anyone in Washington, DC, could manage this task.
This positions the sector as an ideal target for regulatory capture. "Larger corporations understand how to maneuver within the system," he notes. "They'll ensure their representatives are placed within the committee to guarantee that those being monitored are the ones in control."
Polosukhin advocates for an open source approach, emphasizing that the technology should inherently ensure accountability. He departed from Google and established the Near Foundation, a blockchain/Web3 nonprofit, prior to the release of the transformers paper in 2017. Currently, his firm is slightly shifting its focus to incorporate the values of transparency and accountability into what he terms "user-owned AI." By adopting the framework of blockchain-based crypto protocols, this AI strategy would feature a decentralized system on an impartial platform.
The system would be collectively owned, he emphasizes. At some point, further expansion would no longer be necessary. It's similar to how bitcoin operates: its value may fluctuate, but no central authority mandates a specific revenue target, such as an additional $2 billion in a year. This approach, he says, could align incentives and establish a neutral platform.
Polosukhin mentions that developers are utilizing Near's platform for creating applications compatible with this open-source framework. Near has initiated an incubation program aimed at assisting startups with their projects. A notable application under development is a system designed to allocate small payments to creators contributing content to AI models.
I point out to Polosukhin that, given crypto's reputation and Web3's failure so far to win over the internet, invoking blockchain as the answer to controlling the era's most unpredictable technology might not inspire much confidence. "We indeed require assistance with marketing," he concedes.
A common critique of open source artificial intelligence posits that making powerful AI technologies universally accessible could enable malicious individuals to misuse AI, potentially for spreading false information or devising novel weaponry. Polosukhin counters this notion, arguing that open platforms are not inherently more dangerous than our current systems. He believes that the safety measures touted are merely superficial constraints on the capabilities of these models. According to him, bypassing these restrictions is not particularly challenging.
Polosukhin has been actively promoting his concept, engaging with experts across the sector, including several coauthors of the "Attention" paper. Uszkoreit, his companion at that pivotal 2016 lunch, has shown the most agreement. When I spoke with Uszkoreit recently, he concurred with most of Polosukhin's points, although he's not fond of the label: rather than "user-owned AI," he suggests the term "community-owned AI."
He is genuinely enthusiastic about the potential of adopting an open source strategy, possibly incorporating Polosukhin's micropayment system, as a solution to the challenging intellectual property issues sparked by AI. Currently, large corporations are locked in a significant legal dispute with the original creators, whose contributions are crucial to the development of their AI technologies. Given these companies' relentless pursuit of profit, this conflict is inherent. Efforts to fairly compensate the creators are likely to be compromised by the companies' ultimate goal of maximizing their own financial gain. This issue could be avoided with a system designed from the ground up to acknowledge and reward such contributions.
Uszkoreit believes that the issues surrounding intellectual property would be resolved by a mechanism that acknowledges the efforts of those who create content. For the first time ever, he emphasizes, we are in a position to determine the worth of information to society over extended periods.
A concern raised about the concept of a user-controlled model is the source of funding necessary to build an advanced base model from the beginning. Currently, Polosukhin's initiative employs a variant of Meta’s Llama model, notwithstanding his hesitations. The question arises: who would invest a billion dollars to develop a completely open-source model? Could it be a government entity? Perhaps a large-scale crowdfunding campaign? Uszkoreit muses that a technology firm not quite at the pinnacle of the industry might embark on such a project as a strategy to disrupt their rivals. The answer remains uncertain.
Polosukhin and Uszkoreit firmly believe that if user-owned AI does not exist before the arrival of artificial general intelligence, the point at which AI begins to improve itself, the consequences will be catastrophic. As Uszkoreit puts it starkly, "we're in deep trouble." Both agree that the development of self-improving AI is only a matter of time, and that if the status quo holds, a major tech company will likely get there first. "This could trigger a domino effect where a handful of companies, or perhaps just the first to succeed, gains access to a virtually unlimited wealth generator. That could monopolize economic resources, creating a win-lose scenario that would devastate the economy, an eventuality that must be avoided at all costs."
When I ask whether he ever regrets his role in advancing AI, given the potential downsides, Polosukhin is unequivocal: the advances would have happened regardless of their participation, perhaps just later. "We need to adopt a new framework to progress, and that's my focus," he asserts. Building AI that is controlled by its users, he argues, ensures a level playing field where major players like OpenAI and Google can't monopolize the industry, balancing both the risks and the advantages. It's a change that could fairly be called revolutionary.
Temporal Journeys
Earlier this year, I wrote about the creation of transformers by the authors of the "Attention Is All You Need" paper, the innovation that sparked the surge in generative AI. A seemingly casual chat between Polosukhin and Uszkoreit turned out to be a pivotal moment in what would become a monumental success for the eight Google employees involved, all of whom have since left the company.
In 2016, during a lunch at a Google café, Uszkoreit engaged in conversation with Illia Polosukhin, a scientist hailing from Ukraine who had spent almost three years at Google. Polosukhin's role involved developing systems to provide instant answers to queries entered in Google's search box, but the project was facing challenges. Polosukhin explained the difficulty, noting, "To deliver an answer on Google.com, the solution must be both cost-effective and high-performing," due to the extremely brief time frame available to provide a response. Upon hearing Polosukhin’s frustrations, Uszkoreit quickly proposed a solution. "He recommended exploring self-attention as an option," Polosukhin recalled.
Polosukhin occasionally teamed up with a colleague, Ashish Vaswani, who was born in India and spent a significant part of his upbringing in the Middle East. Vaswani pursued his PhD at the University of Southern California, where he was a part of the prestigious machine translation group. Following his studies, he relocated to Mountain View to become part of Google, specifically joining an innovative team known as Google Brain. Vaswani saw Google Brain as an avant-garde collective, convinced that neural networks would push the boundaries of human comprehension. Despite his involvement, he was on the lookout for a significant project to dedicate himself to. Situated in Building 1965, right next to Polosukhin's language-focused team in Building 1945, Vaswani got wind of the self-attention concept. Seeing potential, he decided to commit to this new endeavor.
The three scientists collaborated on a design document titled “Transformers: Iterative Self-Attention and Processing for Various Tasks.” They settled on the name “transformers” from the very beginning. Uszkoreit explains that the choice reflected the idea of transforming the incoming data so thoroughly that the system mimics, or seems to mimic, human-level comprehension. He also traces it to childhood days spent with Hasbro's Transformers toys: “As a little kid, I owned two Transformers action figures,” he recalls. To cap off the design document, they included a whimsical illustration of six Transformers in mountainous terrain, zapping lasers at one another. The paper opened with a bold declaration of their confidence: “We are awesome.”
Inquiry Corner
Dileep inquires, "What's your take on the optimal way for students to utilize existing AI resources, and how should educators guide them in this process? I've been promoting the use of tools like ChatGPT to give them an advantage. However, it's becoming blatantly obvious to readers when they're employing ChatGPT."
Thank you for your inquiry, Dileep, and for offering some background information. You mentioned you're an educator at a school with a significant number of students from immigrant backgrounds who are also working to support their studies. It appears you're grappling with the dilemma of whether employing generative AI as a quick solution for satisfactory assignments is actually giving them an advantage, or rather, giving a misleading sense of having acquired a real skill, as you put it.
Let's set aside ChatGPT's capabilities for gathering information and offering overarching views on topics or condensing lengthy documents—these applications don't seem to stir much debate. (Though, I would argue that engaging with the original materials offers a more profound comprehension that is beneficial in the long term.) The current debate centers on the extent to which students should depend on AI to complete their assignments. This debate mirrors the historical discussion about whether using calculators for math assignments constitutes cheating. The opposing view argues that in today's world, manual calculations are no longer necessary, and it's unrealistic to ignore this fact. This modern perspective has ultimately prevailed.
However, crafting essays that draw upon research and reasoned argumentation stands as a distinct challenge. Structuring one's findings and articulating them through a reasoned debate serves as an invaluable exercise in cultivating an empirical mindset. Engaging in persuasive writing compels the author to address the concerns of a potentially critical audience, necessitating the presentation of well-supported, cogent arguments. Perfecting this ability sharpens one's intellectual faculties. It could be argued that the emergence of ChatGPT and similar technologies might eventually make essay writing redundant, as if these language models were mere word processors. Yet, the ultimate goal isn't merely to produce text for evaluation but to navigate a process that fosters logical and empathetic thinking.
The issue isn't merely that the essays produced by ChatGPT are of average quality. Rather, the core of the matter is the lack of substantial educational benefit derived from using such shortcuts. To gain an advantage in life, students need to engage in the challenging process of gathering evidence, understanding and applying logic, and clearly articulating their ideas. This foundation will benefit them in all aspects of their professional lives and beyond. For some students, this task will be particularly tough—I have firsthand experience teaching entry-level composition at a public university and have witnessed the struggles students face. Moreover, persuading them to abandon the allure of this novel tool, ChatGPT, will not be easy. However, the rewards of doing so will be immensely valuable.
Feel free to send inquiries to mail@wired.com. Please include ASK LEVY in the email's subject heading.
Apocalyptic Times Report
The recent June heatwave claims yet another casualty: a wax statue of Abraham Lincoln, specifically its head, which succumbed to the heat and had to be detached. The creator of the piece designed it to endure up to 140 degrees, but the year 2024 proved too much for it.
Final Thoughts
A writer shares her reasons for consenting to become an AI bot that engaged in conversations with individuals reading Romeo and Juliet.
Were you under the impression that SimCity was merely about urban planning? Think again, it was actually a covert blueprint for libertarian ideals!
Meta's Ray-Ban intelligent eyewear vows to break down language obstacles. However, a visit to Montreal showed a different reality.
Are you commemorating the festive season? Here are the top fire pits to convene around. Kindly refrain from using them in national parks, though.
Ensure you don't miss out on upcoming editions exclusive to subscribers of this column. Take advantage of a 50% discount for Plaintext readers and subscribe to WIRED today.