Unlocking the Power of Persuasion: How OpenAI’s Latest Venture Aims to Transform Public Health Through AI
OpenAI Explores Its Influential Potential
This week, OpenAI CEO Sam Altman and Thrive Global founder and CEO Arianna Huffington co-authored an article in Time promoting Thrive AI, a new venture backed by Thrive Global and OpenAI's Startup Fund. The article argues that AI could deliver major public-health benefits by nudging people toward healthier lifestyles. Altman and Huffington write that Thrive AI aims to build "a fully integrated personal AI coach" offering timely, personalized suggestions that help individuals improve their health through small daily actions.
Their essay puts an optimistic spin on what may prove to be one of artificial intelligence's sharpest double-edged swords. AI models are already adept at persuading people, and there is no telling how much stronger that ability may become as the systems advance and gain access to ever more personal data.
Aleksander Madry, a professor on sabbatical from the Massachusetts Institute of Technology, heads a team at OpenAI called Preparedness that works on exactly this issue.
Speaking with WIRED in May, Madry said that one stream of Preparedness's work concerns persuasion: essentially, the degree to which these models can be used as a way of influencing people.
Madry says he was drawn to join OpenAI by the remarkable capabilities of language models and by the fact that their associated risks remain largely unstudied. "There's practically no research on this," he notes. "This was the motivation behind the Preparedness initiative."
Persuasion is a core ingredient of platforms like ChatGPT and a large part of what makes such chatbots compelling. Language models are trained on human writing and dialogue rife with rhetorical tricks and persuasive techniques, and they are often fine-tuned to produce responses that users find more engaging.
A study published in April by Anthropic, a company founded by former OpenAI employees, suggests that as language models have grown in size and sophistication, they have become better at persuading people. The study measured how volunteers' opinions shifted after they read a statement followed by an argument crafted by AI.
OpenAI is also studying how AI performs in back-and-forth conversation with users, a setting that could make it more persuasive still. Madry says this research involves volunteers who have consented to participate, though he declines to share any findings so far. He stresses the outsized influence of language models: because humans tend to anthropomorphize anything that communicates with us in natural language, chatbots can come across as more lifelike, and more persuasive, than they really are.
The Time article argues that the health benefits of persuasive AI must be paired with strong legal safeguards, given the extensive personal data these systems may access. Altman and Huffington write that lawmakers need to establish regulations that encourage AI development while protecting individual privacy.
Policymakers have more to weigh than privacy alone. They should also consider how persuasive algorithms might be misused: AI-driven systems could supercharge the spread of disinformation, generate remarkably convincing scams, or simply be deployed to sell products.
Madry points out a key question that neither OpenAI nor anyone else has properly studied: how persuasive, or coercive, AI systems might become when they interact with the same people over long stretches of time. Several companies already offer chatbots that role-play relationships and other personas, and AI companions are gaining traction, including some that mimic the scolding of a partner. Yet how addictive and persuasive these bots can be remains mostly unexplored.
The excitement and hype generated by ChatGPT following its launch in November 2022 led OpenAI, outside researchers, and many policymakers to fixate on the more speculative question of whether AI might someday turn against its creators.
Madry worries that the subtler dangers posed by persuasive algorithms are being overlooked, and that policymakers are concentrating on the wrong issues. "There's this notion that merely discussing the topic means we are addressing it, when in reality, we're not focusing on the actual problem," he says.
Copyright 2024 by Condé Nast. All rights reserved.