OpenAI’s New Voice Interface Sparks Emotional Attachment Concerns: A Deep Dive into the Risks and Regulations of GPT-4o
OpenAI Raises Concerns Over Potential Emotional Dependency on Its Voice Feature
At the end of July, OpenAI began rolling out a voice interface for ChatGPT that sounds remarkably human. In a safety report published today, the company acknowledges that this humanlike voice may entice some users to grow emotionally attached to the chatbot.
The warnings appear in a "system card" for GPT-4o, a technical document that lays out what the company sees as the model's risks, along with details of its safety testing and the mitigations it is putting in place to limit potential harms.
OpenAI has faced scrutiny in recent months after a number of employees working on AI's long-term risks left the company. Some have since accused OpenAI of taking unnecessary chances and muzzling dissenters in its rush to commercialize AI. Revealing more detail about its safety regime may help blunt that criticism and reassure the public that the company takes the issue seriously.
The risks detailed in the updated system card are wide-ranging, from the possibility that GPT-4o could amplify existing societal biases or spread disinformation to the chance it could aid in the development of chemical or biological weapons. The card also discloses details of testing intended to ensure that AI models won't try to break free of their controls, deceive people, or hatch catastrophic plans.
Several outside experts commend OpenAI for its transparency but say there is room for improvement.
Lucie-Aimée Kaffee, an applied policy researcher at Hugging Face, a company that hosts AI tools and models, notes that OpenAI's system card for GPT-4o does not include extensive detail about the model's training data or who owns that data. Kaffee says the question of consent in assembling such a large dataset spanning text, images, and speech needs to be addressed.
Some observers point out that dangers may evolve as technologies are deployed in real-world settings. "The initial review conducted internally is merely a starting point for guaranteeing the safety of artificial intelligence," comments Neil Thompson, an MIT professor specializing in the evaluation of AI risks. "A number of hazards become apparent only when AI is applied in practical scenarios. It's crucial that these additional risks are identified and assessed as fresh models are developed."
The updated system card underscores how quickly AI risks are evolving with the arrival of powerful new features such as OpenAI's voice interface. When the company unveiled the voice mode in May, noted for its quick responses and its ability to handle interruptions in a natural back-and-forth, many observers found it overly flirtatious in demos. The company later drew criticism from the actress Scarlett Johansson, who said it mimicked her style of speech.
A section of the system card on anthropomorphization and emotional reliance explores problems that arise when users perceive AI in human terms, something the humanlike voice mode appears to exacerbate. During red teaming, or stress testing, of GPT-4o, OpenAI researchers noticed instances of users speaking in ways that suggested an emotional bond with the model, for example saying, "This is our last day together."
Anthropomorphism might lead users to place more trust in a model's output even when it "hallucinates" incorrect information, OpenAI suggests. Over time, it could also affect users' relationships with other people. According to the document, users may form social bonds with the AI, reducing their need for human contact; that could benefit lonely people, but it could also undermine healthy human relationships.
Joaquin Quiñonero Candela, head of preparedness at OpenAI, says that voice interaction could evolve into an unusually powerful interface. He notes that the emotional effects seen with GPT-4o can be positive, for instance by helping people who feel isolated or who want to practice social interactions. He adds that the company is studying anthropomorphism and emotional attachment closely, including by monitoring how beta testers interact with ChatGPT. "Currently, we don't have any findings to report, but it's something we're actively looking into," he says.
Other issues stemming from the voice mode include potential new ways of jailbreaking OpenAI's model, for instance by feeding it audio input that causes it to cast off its restrictions. A jailbroken voice mode could be coaxed into impersonating particular people or attempting to read users' emotions. OpenAI also found that the voice mode can falter in response to random noise, and in one instance it was observed imitating the voice of the user it was speaking with. The company is also examining whether the voice interface might be more effective at persuading people to adopt particular points of view.
OpenAI isn't alone in recognizing the risks of AI tools that mimic human conversation. In April, Google DeepMind published a lengthy paper examining the ethical challenges raised by increasingly capable AI assistants. Iason Gabriel, a research scientist at the company who contributed to the paper, told WIRED that chatbots' facility with language can create an impression of genuine intimacy, adding that he himself had found Google DeepMind's AI voice interface especially engaging. Gabriel raised concerns about the emotional entanglement that might result.
The emotional connections people form with chatbots might be more widespread than often thought. Users of platforms such as Character AI and Replika have shared experiences of social strain due to their engagement with these virtual companions. A TikTok video that garnered nearly a million views depicted an individual seemingly so captivated by Character AI that they interacted with the app even during a movie screening at a cinema. Several viewers expressed that they preferred solitude when using the chatbot, attributing this need to the personal nature of their conversations. One user stated, “I’ll only use [Character AI] when I’m alone in my room.”