Global Influence Campaigns Grapple with AI Learning Curve, OpenAI Report Reveals
OpenAI Publishes Report on Misuse of AI in Global Influence Efforts
OpenAI released a report today detailing attempts by actors in Russia, Iran, China, and Israel to exploit its AI technology for global influence campaigns. The report describes five networks the company identified and shut down between 2023 and 2024, including Russia’s Doppelganger and China’s Spamouflage, which had been experimenting with generative AI to streamline their operations. It also finds, however, that these efforts have not been particularly successful.
It’s somewhat comforting to know that these groups haven’t fully harnessed generative AI to become unstoppable vectors of misinformation. But the fact that they are actively exploring its potential is cause for concern.
OpenAI’s report suggests that influence campaigns are running into the limits of generative AI, which does not reliably produce high-quality copy or code. It struggles with idioms, which make language sound more convincingly human and personal, and sometimes fumbles basic grammar, so much so that OpenAI named one of the networks it disrupted “Bad Grammar.” That network was sloppy enough to expose its artificial nature at one point, posting, “As an AI language model, I am here to assist and provide the desired comment.”
One network used ChatGPT to debug code intended to automate posting on Telegram, a chat app that has long been a favorite of extremists and influence networks. Sometimes this worked, but at other times it led to the same account posting as two separate personas, giving the game away.
In other cases, ChatGPT was used to generate code and content for websites and social media. Spamouflage, for instance, used ChatGPT to debug and write code for a WordPress site that published stories attacking members of the Chinese diaspora who were critical of the country’s government.
According to the report, the AI-generated content failed to break out of the influence networks themselves and reach a wider audience, even when shared on widely used platforms such as X, Facebook, and Instagram. That was also true of campaigns run by an Israeli firm apparently working for hire, which posted content ranging from anti-Qatar messaging to criticism of the BJP, the Hindu nationalist party currently governing India.
Taken together, the report paints a picture of several largely ineffective campaigns trafficking in crude propaganda, seemingly allaying the fears of many experts about this new technology’s potential to spread misleading and false information, particularly during a crucial election year.
But influence campaigns on social media are constantly adapting to stay under the radar, often learning the ins and outs of platforms better than the platforms’ own employees. Jessica Walton, a researcher at the CyberPeace Institute who has studied Doppelganger’s use of generative AI, says that while these early campaigns may be small or ineffective, they appear to be in an experimental phase.
In her research, the network used authentic-seeming Facebook profiles to post articles, often on divisive political topics. “The articles themselves are produced by artificial intelligence,” she notes. “Primarily, their objective is to test what gets through, what Meta’s algorithms can and cannot detect.”