News Giants Wage Legal Battle Against AI Startup Perplexity for ‘Hallucinating’ Fake News Content
Legal Action Targets Perplexity for Artificially Generated Misinformation
A new lawsuit against the AI startup Perplexity alleges that the company not only infringes copyright but also violates trademark law by fabricating portions of news articles and falsely attributing those fabrications to established publishers.
Today, Dow Jones, publisher of The Wall Street Journal, and the New York Post, both subsidiaries of Rupert Murdoch's News Corp, filed a copyright infringement lawsuit against Perplexity in the US District Court for the Southern District of New York.
Perplexity has clashed with news organizations before. Just a few weeks ago, The New York Times sent Perplexity a formal notice accusing it of unauthorized use of its content. Over the summer, investigations by Forbes and WIRED found instances in which Perplexity appeared to have copied articles; Condé Nast, the parent company of both Forbes and WIRED, subsequently sent formal warnings of its own.
A WIRED investigation from this past summer, which the lawsuit cites, showed Perplexity producing inaccurate summaries of WIRED articles, including one case in which it falsely claimed WIRED had reported that a California police officer committed a crime he did not commit. Earlier today, The Wall Street Journal reported that Perplexity is seeking to raise $500 million in its next funding round at a valuation of $8 billion.
Dow Jones and the New York Post point to instances in which Perplexity allegedly generated entirely fabricated passages and presented them as parts of real news articles. In the context of artificial intelligence, "hallucination" describes a model confidently outputting information that is false as though it were true.
In one example highlighted in the complaint, Perplexity Pro reproduced two paragraphs verbatim from a New York Post article about a dispute between US senator Jim Jordan and European Union commissioner Thierry Breton over Elon Musk and X, then appended five paragraphs about free speech and online regulation that appeared nowhere in the original story.
The lawsuit alleges that inserting these fabricated passages into genuine journalism and attributing them to the Post constitutes trademark dilution and is likely to mislead readers. According to the complaint, hallucinated content presented as legitimate news from trusted sources, under the plaintiffs' trademarks, diminishes the value of those marks by injecting doubt and distrust into the newsgathering and publishing process, and harms the news-reading public as well.
Perplexity did not respond to a request for comment.
In an email to WIRED, News Corp CEO Robert Thomson drew an unflattering comparison between Perplexity and OpenAI, praising OpenAI for the integrity and creativity he called crucial to fulfilling the promise of artificial intelligence. In his statement, Thomson said Perplexity is not the only AI company misusing intellectual property, nor will it be the last that News Corp actively and meticulously confronts. While the company would prefer engagement to litigation, he added, protecting its journalists, its authors, and the business itself compels it to fight this theft of content.
OpenAI faces trademark dilution claims of its own. In New York Times v. OpenAI, the newspaper accuses OpenAI and its partner Microsoft of tarnishing its reputation, contending that ChatGPT and Bing Chat attribute fabricated statements to the publication. In one example cited in that lawsuit, Bing Chat incorrectly stated that the Times had endorsed red wine, in moderation, as beneficial for heart health, a claim the newspaper says its reporting has actually refuted.
"Using news articles to run alternative, profit-driven AI services is illegal, a point we emphasized through our correspondence with Perplexity and our legal actions against Microsoft and OpenAI," says Charlie Stadtlander, director of external communications at the New York Times. "We commend the legal action taken by Dow Jones and the New York Post, viewing it as a crucial move in safeguarding publisher materials from such unauthorized use."
Some legal experts question whether the false designation of origin and trademark dilution claims will succeed. Vincent Allen, an intellectual property attorney and partner at Carstens, Allen & Gourley, believes the copyright infringement allegations in the case are stronger, and says he would be "surprised" if the false designation of origin claim holds up. Allen and James Grimmelmann, a professor of digital and internet law at Cornell University, both point to Dastar v. Twentieth Century Fox Film Corp. as a potential obstacle. In that case, a dispute over historical World War II footage, the Supreme Court held that "origin" in trademark law does not cover authorship: it refers to the producer of physical goods, such as counterfeit merchandise, not the creator of pirated creative content like a film. Grimmelmann also doubts the trademark dilution argument will succeed: "Dilution refers to using a trademark in a manner that diminishes the uniqueness of a well-known mark. I … simply don't see it happening in this instance."
Should publishers prevail on the argument that hallucinations can violate trademark law, AI companies could face "significant challenges," says Matthew Sag, a professor of law and artificial intelligence at Emory University.
Sag argues that guaranteeing a language model will never hallucinate is effectively impossible. Because these models work by generating words that seem plausible in response to a prompt, he says, they are always engaged in a form of making things up; sometimes the results simply sound more believable than at other times.
"We label it a hallucination only when it conflicts with our perception of reality, yet the mechanism behind it remains identical, regardless of whether we approve of the result or not," he says.
© 2024 Condé Nast. All rights reserved.