Facing the Future: The Rise of Emotionally Expressive Chatbots and the Ethical Dilemmas They Pose
Will Knight
Gear Up for Emotional Engagement with Expressive Chatbots
Computers that imitate human social behaviors, emotions, or wit are nothing new. How convincingly they can now pull it off, though, is something we're still adjusting to.
On Monday, OpenAI unveiled a major update to ChatGPT that hints at bigger changes ahead. The new release is built on an updated AI model called GPT-4o, which, according to OpenAI, is better at making sense of visual and auditory input, a capability the company calls "multimodal." In practice, that means you could point your smartphone at something, say a shattered coffee mug or a tricky math equation, and ask ChatGPT what to do next. But the most captivating part of OpenAI's demo was ChatGPT's new "personality."
The upgraded chatbot spoke in a sultry female voice that reminded many listeners of Scarlett Johansson's AI operating system in the movie "Her." Throughout the demo, ChatGPT used that voice to convey different emotions, laugh at jokes, and even deliver the occasional flirtatious reply, mimicking human feelings that software does not actually have.
OpenAI announced its new technology just one day before Google I/O, Google's annual developer conference, and the timing hardly seems accidental. At I/O, Google demoed its own advanced AI assistant prototype, called Project Astra, which can hold fluid voice conversations and make sense of the world through video.
Google pointedly avoided giving its assistant human-like qualities, opting instead for a more reserved, robotic tone. Google DeepMind, the company's AI division, recently published a lengthy research paper titled "The Ethics of Advanced AI Assistants." It argues that AI assistants that mimic human behavior too closely could bring new problems, including fresh privacy risks, new forms of technological dependence, and more potent means of misinformation and manipulation. Plenty of people already spend considerable time with chatbot companions or AI partners, and the experience is only becoming more immersive. In a conversation ahead of the event, Demis Hassabis, who leads Google's AI efforts, said the research was motivated by the possibilities raised by Project Astra. "Given the technology we're developing, it's crucial we address these issues proactively," he said. That sentiment seemed even more pertinent after OpenAI's announcement on Monday.
OpenAI's presentation did not address potential risks. More immersive and compelling AI assistants could prove emotionally manipulative, intensifying their power to persuade and, over time, to hook users. OpenAI CEO Sam Altman nodded to the Johansson comparison on Monday by tweeting "her." OpenAI did not immediately respond to a request for comment, but the company says its charter commits it to prioritizing the development of AI that is safe and beneficial.
It is certainly worth pausing to consider the implications of unnervingly lifelike computer interfaces that peer into our daily lives, especially when they are coupled with corporate incentives to chase profit. It will only get harder to know whether you're talking to a real person on the phone. Companies will surely want to use flirtatious bots to sell their wares, and politicians will likely see them as a way to sway the masses. Criminals will, of course, adapt them to supercharge their scams.
Even without flirtatious interfaces, advanced "multimodal" AI assistants could present new pitfalls by misbehaving in ways that purely text-based systems don't. Text models like the original ChatGPT can be "jailbroken," or manipulated into misconduct, and systems that also ingest audio and video will expose fresh vulnerabilities. Expect users to find inventive ways to coax these assistants into unsavory behavior, or into revealing strange and possibly offensive new personas.