China Leads the Charge in AI Content Regulation with New Watermark Mandate to Combat Misinformation
China's Strategy for Implementing AI Content Identification
Chinese authorities want AI companies and social media platforms to clearly mark content produced by artificial intelligence, whether with Morse-code audio signals, embedded metadata, or labels placed inside virtual-reality scenes. The initiative aims to combat misinformation by ensuring that AI-generated material is plainly identified as such.
On September 14, the Cyberspace Administration of China proposed a new rule intended to make clear to users whether content is authentic or produced by artificial intelligence. As generative AI grows more sophisticated, distinguishing AI-created content from real material has become difficult, fueling problems such as nonconsensual explicit imagery and political misinformation.
China isn't the only government addressing the issue. The European Union's AI Act, passed in March, mandates comparable disclosures, and California recently enacted similar legislation. Earlier Chinese AI policies had also briefly touched on the need for general AI labeling.
The newly proposed policy spells out in detail how different platforms should apply AI watermarks. It is also the first to state that social media platforms will face penalties if AI-generated content spreads widely on their services without proper labeling. AI companies and social platforms therefore face greater financial and legal risk if they skip adequate labeling mechanisms.
China aims to take the lead in setting the international standards for AI regulation through quick and forward-thinking policies. Angela Zhang, a law professor at the University of Southern California who focuses on Chinese technology regulations, notes, "China is clearly in the lead compared to the EU and the USA in the area of AI content moderation, a move largely influenced by the government's desire to maintain political conformity in chatbot interactions." She further adds that China now sees an opportunity to influence worldwide industry norms, especially in the area of labeling, which she believes holds potential for global agreement on specific technical standards.
Regulating Artificial Intelligence Proves Challenging
First, the proposed rules require providers of AI services to explicitly mark AI-generated content: watermarks on images, visible notice labels at the start of AI-created videos or virtual-reality scenes, or the Morse-code signal for "AI" (· – · ·) played before or after an AI-generated audio clip.
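The audio rule is concrete enough to sketch. The snippet below (a hypothetical illustration, not from the regulation) builds the dot/dash sequence for "AI" and converts it into an on/off tone schedule using standard Morse timing, where a dash lasts three dot units:

```python
# Sketch: the Morse pattern for "AI" (. -  . .) that the draft rule says
# should play before or after AI-generated audio. Timing values follow
# standard Morse conventions; the regulation itself specifies no timing.
MORSE = {"A": ".-", "I": ".."}

def morse_for(text: str) -> str:
    """Return the dot/dash sequence for each letter, space-separated."""
    return " ".join(MORSE[ch] for ch in text.upper())

def to_tone_schedule(pattern: str, unit_ms: int = 100) -> list[tuple[str, int]]:
    """Translate dots/dashes into (state, duration_ms) pairs:
    dot = 1 unit on, dash = 3 units on, 1 unit of silence between
    symbols, 3 units of silence between letters."""
    schedule = []
    for letter in pattern.split(" "):
        for symbol in letter:
            schedule.append(("on", unit_ms if symbol == "." else 3 * unit_ms))
            schedule.append(("off", unit_ms))
        schedule[-1] = ("off", 3 * unit_ms)  # longer gap after each letter
    return schedule

print(morse_for("AI"))  # .- ..
```

A real implementation would render this schedule as sine-wave tones and prepend it to the audio file; the schedule itself is the part the rule actually prescribes.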
To varying extents, these actions are already being implemented within the industry. However, the proposed laws would transform these optional practices into mandatory legal responsibilities, compelling AI technologies with inadequate labeling systems to improve or risk facing sanctions from the authorities.
The problem with explicit markings, however, is that they are easy to strip, for example by cropping out a watermark or trimming the end of a video. So the rule also requires companies to embed implicit tags in the metadata of AI-generated files, which must include the abbreviation "AIGC" along with encrypted details about the entities that produced and distributed the file. It further encourages them to embed invisible watermarks in the content itself, imperceptible to viewers.
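A minimal sketch of such an implicit tag might look like the following. The field names and schema are assumptions (the rule specifies no format), and an HMAC stands in here for the "encrypted details" the rule calls for, since tamper evidence is the practical goal:

```python
import hashlib
import hmac
import json

def make_implicit_tag(producer: str, distributor: str, content_id: str,
                      secret: bytes) -> dict:
    """Sketch of an implicit metadata tag: the literal "AIGC" label plus
    producer/distributor details, protected by an HMAC so tampering is
    detectable. Schema and key handling are illustrative assumptions."""
    payload = {"producer": producer, "distributor": distributor,
               "content_id": content_id}
    digest = hmac.new(secret, json.dumps(payload, sort_keys=True).encode(),
                      hashlib.sha256).hexdigest()
    return {"label": "AIGC", "payload": payload, "hmac": digest}

def verify_implicit_tag(tag: dict, secret: bytes) -> bool:
    """Recompute the HMAC and check both the label and the digest."""
    expected = hmac.new(secret,
                        json.dumps(tag["payload"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return tag["label"] == "AIGC" and hmac.compare_digest(expected, tag["hmac"])
```

In production such a tag would live inside the file's metadata container (EXIF, XMP, MP4 boxes, and so on), which is exactly where the cross-company standardization problem described below arises.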
In practice, adding implicit tags to metadata would necessitate increased collaboration and compliance with standardized regulations across numerous businesses.
"Creating standards for metadata that function effectively across various AI models, deployment systems, and platforms is a highly ambitious goal, presenting significant technical hurdles and financial implications for implementation," states Sam Gregory, the executive director of Witness, a human rights group based in New York. He believes that achieving this will span years rather than months.
But arguably the hardest part of the Chinese regulation is its requirement that social media platforms identify AI-generated content. The rule mandates that "online information content transmission platform services" check uploaded files for embedded labels and other signs of AI generation. Platforms must attach an AI-generated label if the metadata indicates it, if the uploader voluntarily discloses it, or if the platform suspects the content is AI-generated. They must also add their own details to the metadata so the content's journey across the internet can be traced.
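The decision logic the rule describes has three independent triggers, which can be sketched directly. Field names such as "transmission_chain" are hypothetical; the rule only says platforms must append their own details for traceability:

```python
def should_label_as_ai(metadata: dict, uploader_declared: bool,
                       classifier_suspects: bool) -> bool:
    """Sketch of the three triggers the draft rule lists for platforms:
    label content if its metadata carries the "AIGC" tag, if the
    uploader self-declares it, or if the platform's own detection
    suspects AI generation."""
    return (metadata.get("label") == "AIGC"
            or uploader_declared
            or classifier_suspects)

def append_platform_details(metadata: dict, platform_id: str) -> dict:
    """Add the platform's identifier to the metadata so the file's path
    across services can be traced (field name is an assumption)."""
    chain = list(metadata.get("transmission_chain", []))
    chain.append(platform_id)
    tagged = dict(metadata)
    tagged["transmission_chain"] = chain
    return tagged
```

The third trigger is the expensive one: "classifier_suspects" implies running AI-detection models over every upload, which is the operational burden platforms object to below.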
This situation leads to numerous emerging difficulties. Primarily, there is ambiguity regarding which platforms will fall under the category of "online information content transmission platform services" as defined by the legislation. "In broad terms, it's expected that social media networks such as Douyin, WeChat, and Weibo will be included, yet it remains uncertain if e-commerce sites like Taobao and JD, as well as search engines such as Baidu, will also be encompassed," Jay Si, a partner at Zhong Lun Law Firm in Shanghai, explains.
Currently, leading short-form video platforms in China enable users to label their uploads as AI-created at the time of posting. Additionally, these platforms offer the option for users to report other videos that haven't been marked as such, identifying them as potentially AI-generated. They then attach a warning stating, “The content is suspected of being generated by AI.”
Mandatory screening of all content on a platform changes the dynamic significantly, especially given these services' vast user bases inside and beyond China. "The requirement for WeChat or Douyin to inspect each uploaded image to verify its AI generation origin would impose a colossal strain on both the operational and technical resources of the company," Si says. Douyin and Kuaishou, two of China's leading social video platforms, declined to comment.
China Aims to Surpass the EU's AI Legislation
The AI Act of the European Union, often regarded as the most thorough legal structure for AI regulation to date, includes a provision concerning the labeling of content. This mandates that the "results produced by the AI system must be tagged in a format that machines can read and identified as being created or altered by artificial means." Furthermore, it obliges companies to clearly state if the content includes deepfake images or textual content that pertains to matters of public concern.
Businesses are beginning to regulate digital material. "Several companies in the West are embracing the C2PA standard, a metadata-driven provenance framework that reveals the methods through which AI has been applied in creating content," Gregory states. The Coalition for Content Provenance and Authenticity (C2PA) is backed by major names such as Google, Meta, Microsoft, and OpenAI. This initiative is a positive move, according to Gregory, although it has not gained widespread adoption and numerous platforms are yet to implement it.
Jeffrey Ding, an assistant professor of Political Science at George Washington University, suggests that Chinese authorities probably took cues from the EU AI Act. He notes that Chinese officials and academics have previously acknowledged using the EU’s legislation as a model for their own initiatives.
However, certain actions implemented by Chinese authorities might not be feasible elsewhere. For instance, China mandates that social media platforms monitor content uploaded by users for AI involvement. "This approach appears to be quite novel and possibly specific to China," Ding observes. "Such a policy would be unthinkable in the United States, where it's a well-known principle that platforms are not held liable for the content they host."
What About Online Expression Freedom?
The proposed rule on labeling AI-generated content is open for community input until October 14, and it could be a few more months before it's revised and enacted. However, there's hardly any cause for delay among Chinese firms in getting ready for its implementation.
Sima Huapeng, founder and CEO of Silicon Intelligence, a Chinese AIGC company that uses deepfake technology to create AI agents, influencers, and replicas of both living and deceased people, says his product currently lets users choose whether to mark generated content as AI. If the new rule passes, he says, that choice could become compulsory.
Sima notes that companies tend not to build optional features into their products, but if a feature is mandated by law, everyone has to adopt it. Adding watermarks or metadata tags isn't technically difficult, he says, but it does raise operating costs for the companies that comply.
He argues that such regulations could deter AI from being exploited for fraudulent activities or breaching privacy. However, they might also prompt the emergence of an underground market for AI services, where businesses attempt to evade legal obligations and reduce expenses.
There's a delicate balance to be maintained between ensuring AI content creators are responsible and regulating personal expression by implementing advanced tracking.
Gregory highlights a significant challenge concerning human rights: ensuring that these methods do not infringe upon privacy or freedom of speech. Although hidden labels and watermarks are effective in pinpointing the origins of false information and unsuitable content, they also grant more power to both platforms and the government to regulate online posts. Indeed, the fear of AI technologies becoming uncontrollable has spurred China to take the lead in implementing stringent AI regulations.
Simultaneously, the AI sector in China is lobbying the government for increased freedom to innovate and expand, given its lag behind Western counterparts. The initial version of a Chinese law focused on generative AI saw significant dilution from its first draft to the enacted legislation, easing mandates for identity verification and lessening the severity of sanctions for corporations.
Ding observes that the Chinese government is delicately balancing its effort to keep control over content with giving AI labs in strategically important areas enough leeway to experiment and innovate. He sees this regulation as another move toward that equilibrium.
© 2024 Condé Nast. All rights reserved.