Cutting Through the AI Hype: Why Understanding Generative Technology Requires Education
Generative AI Hype Is Everywhere. The Best Way to Tackle It Is Head On, Through Education
Arvind Narayanan, a computer science professor at Princeton University, is best known for debunking exaggerated claims about artificial intelligence on AI Snake Oil, the Substack he co-writes with PhD candidate Sayash Kapoor. The two have now turned their widely read newsletter into a book about AI's limitations.
They aren't against adopting new technology, though. In an interview with WIRED, Narayanan says their critique is often misread as a blanket condemnation of artificial intelligence; their real concern is not the technology itself but the people who spread false claims about what it can do.
In AI Snake Oil, the people responsible for fueling the ongoing hype are sorted into three groups: the companies selling AI, the researchers studying it, and the journalists covering it.
Promoters of False Expectations
Companies that claim to predict the future with algorithms are singled out as potentially the most deceptive. Narayanan and Kapoor write that when such predictive AI systems are deployed, they tend to harm minorities and people in poverty first, citing a municipal algorithm in the Netherlands meant to flag potential welfare fraud that instead wrongly targeted women and immigrants who did not speak Dutch.
The authors also turn a skeptical eye on companies preoccupied with existential risks, such as artificial general intelligence (AGI), the idea of an algorithm that outperforms humans at labor. They don't dismiss the concept of AGI outright, though. "Choosing computer science as my career was largely influenced by the opportunity to contribute to AGI development, which was a significant part of my identity and motivation," Narayanan says. The problem, they argue, arises when these organizations prioritize long-term existential risks over the harm AI tools cause people right now, a concern echoed by many researchers I have spoken with.
Much of the hype and misunderstanding, the authors argue, can also be blamed on shoddy, non-reproducible research. "Our research revealed that in many areas, the problem of data leakage results in overly positive assertions regarding AI's effectiveness," Kapoor says. Data leakage occurs when an AI system is evaluated on data it was also trained on, akin to handing students the exam answers in advance.
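As a rough illustration of the problem Kapoor describes (a minimal sketch of my own, not drawn from the book, assuming Python with scikit-learn installed and synthetic data standing in for a real benchmark), a model scored on the same data it was trained on reports inflated accuracy compared with one scored on held-out data:

```python
# Minimal sketch of data leakage: evaluating a model on examples it already
# saw during training inflates the reported accuracy, like grading students
# on exam questions they were handed in advance.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic classification data stands in for a real benchmark dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Leaky evaluation: the "test" examples were also used for training.
leaky_accuracy = accuracy_score(y_train, model.predict(X_train))

# Honest evaluation: examples the model has never seen.
held_out_accuracy = accuracy_score(y_test, model.predict(X_test))

print(f"accuracy with leakage:     {leaky_accuracy:.2f}")
print(f"accuracy on held-out data: {held_out_accuracy:.2f}")
```

The gap between those two numbers is, in miniature, the kind of over-optimism the authors say creeps into published research when test data leaks into training.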
In "AI Snake Oil," scholars are criticized for fundamental mistakes, but the indictment of journalists is more severe. The Princeton team contends that journalists often simply rehash press releases, presenting them as original news. They highlight the particularly harmful practice of journalists compromising their integrity to preserve their ties and access to major tech firms and their leaders.
The complaints about access journalism strike me as fair. In retrospect, I could have asked tougher or more insightful questions in some of my interviews with the leaders of major AI companies. But the authors may be oversimplifying the issue. The fact that big AI companies let me in the door doesn't prevent me from writing skeptical articles about their technology or pursuing investigative pieces I know will annoy them. (Yes, even when they strike business deals, as OpenAI has, with WIRED's parent company.)
News stories that play up AI's supposed abilities also misrepresent what it can actually do. Narayanan and Kapoor point to a 2023 chatbot transcript by New York Times columnist Kevin Roose, a conversation with Microsoft's tool headlined "Bing's A.I. Chat: 'I Want to Be Alive. 😈'", as an example of journalists sowing public confusion about the idea of sentient AI. "Roose was among those who published such stories," Kapoor says. "However, it's concerning how repetitive narratives about chatbots desiring life can significantly influence public perception." Kapoor cites the ELIZA chatbot from the 1960s, whose users quickly attributed human qualities to a crude program, as an early example of the deep-seated tendency to humanize computational algorithms.
Reached over email, Roose declined to comment and instead pointed me to a passage from his related column, published separately from the extensive chatbot transcript, in which he explicitly states that he knows the AI is not sentient. The introduction to his transcript emphasizes the chatbot's "hidden yearning to emulate humanity" alongside its "reflections on its inventors," and the comments section is strewn with readers anxious about the chatbot's powers.
In "AI Snake Oil," the use of imagery in news pieces is scrutinized. Commonplace visual symbols, such as robot photographs heading stories about artificial intelligence, are critiqued. The authors are particularly annoyed by the recurring image of a human brain filled with electronic circuits to symbolize AI's neural networks. Narayanan expresses his displeasure, stating, “We're not huge fans of circuit brain. The metaphor, rooted in the notion that intelligence equates to computational ability, is highly problematic.” He recommends that images of AI chips or graphics processing units be utilized as visual representations in articles about artificial intelligence.
The authors insist on taking AI hype seriously because they believe large language models (LLMs) will remain a major force in society, which makes more precise conversations about them necessary. Kapoor argues that LLMs' influence in the coming years should not be underestimated. Even if an AI market downturn arrives, I agree that some aspects of generative technology are likely here to stay in one form or another. And as generative AI tools are rapidly rolled out to consumers through mobile apps and other software, there is even greater urgency for a better understanding of what AI is and where its limits lie.
A good first step toward understanding AI is grappling with how vague the term is. It lumps together many different technologies and research areas, such as natural language processing, into a single, easily marketable category. AI Snake Oil divides the field into two broad types: predictive AI, which analyzes data to forecast future outcomes, and generative AI, which produces likely answers to prompts based on patterns in past data.
Anyone who encounters AI tools, whether by choice or by chance, would benefit from spending a little time learning fundamental concepts like machine learning and neural networks. Doing so demystifies the technology and helps inoculate against the overwhelming barrage of AI hype.
In my two years of reporting on artificial intelligence, I've noticed that even readers who grasp a few of generative AI's shortcomings, like its tendency to produce errors or biased answers, still broadly underestimate the full range of its limitations. For example, in the next edition of AI Unlocked, my newsletter designed to encourage readers to experiment with AI and understand it more deeply, we devote an entire lesson to whether ChatGPT can be trusted to answer readers' medical questions, and whether it should be trusted with sensitive personal health queries, like those awkward questions about toenail fungus.
Someone who knows where a model's training data comes from, often vast swaths of the internet, including Reddit threads, is likely to view its answers with healthier skepticism and less unwarranted confidence.
Narayanan believes so strongly in the value of quality education that he began teaching his own children about the benefits and pitfalls of artificial intelligence at an early age. "In my opinion, this education ought to begin in primary school," he says. "This stance stems not only from my role as a father but also from my grasp of the existing studies, guiding me towards a very technology-centric method."
Generative AI may now be able to write half-decent emails and occasionally help with communication, but only well-informed people can correct misunderstandings about the technology and shape a clearer narrative going forward.