US AI Secrets at Risk: National Security Experts Urge Stronger Protections Against Espionage
Last year, the White House reached a voluntary security agreement with leading AI developers, including Google and OpenAI, in which the companies committed to assessing the risks posed by increasingly sophisticated software like ChatGPT. Now a former domestic policy adviser to President Biden who helped craft that agreement says AI firms urgently need to strengthen their defenses, particularly against espionage by China.
"Because of their lagging position, China is likely to seek benefits from our advancements," said Susan Rice, who left her White House role last year, speaking Wednesday at an event on AI and geopolitics hosted by Stanford University's Institute for Human-Centered AI. "That could mean buying and modifying our best open-source models, or stealing our most closely guarded secrets. We need to consider every way we can maintain our lead. My concern is that we are falling behind on security."
Rice's concerns are well grounded. The former National Security Advisor under President Obama spoke just months after the US Justice Department, in March, announced an indictment against a former Google software engineer accused of stealing confidential information about the company's TPU AI chips, allegedly with the intention of using it in China.
Prosecutors warned at the time that the case might represent just one instance of China's attempts to compete unfairly in what has been called an AI arms race. Government officials and security researchers worry that advanced AI systems could be abused to generate realistic deepfakes for persuasive disinformation campaigns, or even to help design potent biological weapons.
Not every AI developer or researcher is convinced their work needs protecting; some argue that current models aren't advanced enough to warrant such measures. Companies such as Meta, which develops open-source AI models, openly release much of their work rather than keeping the tight grip officials like Rice might prefer. Rice acknowledged that stricter security protocols could slow US companies down by shrinking the pool of people able to improve their AI systems.
Attention to securing AI models appears to be growing. The US think tank RAND recently released a report identifying 38 ways that secrets could leak out of AI projects, including corruption of insiders, unauthorized access, and exploitation of hidden vulnerabilities.
RAND recommended that companies encourage employees to report suspicious behavior by colleagues and restrict access to the most sensitive material to a small number of staff. It placed particular emphasis on protecting so-called model weights, the parameters of an artificial neural network that are tuned during training and give a model its capabilities, such as ChatGPT's ability to answer questions.
President Biden's sweeping executive order on AI, issued last October, directs the US National Telecommunications and Information Administration to deliver a report this year on the pros and cons of keeping model weights confidential. The order already requires companies developing advanced AI models to inform the US Commerce Department of the "physical and cybersecurity strategies implemented to safeguard those model weights." And according to a Reuters report last month, the US is considering export restrictions that would limit sales of AI to China.
In comments submitted to the NTIA ahead of its report, Google said it anticipated a rise in attempts to disrupt, degrade, deceive, and steal models. But the company also said its secrets are protected by a dedicated security, safety, and reliability team of top engineers and researchers, and that it is building a framework that would include an expert committee to govern access to models and their weights.
OpenAI, the developer of GPT-4 and ChatGPT, likewise told the NTIA that the choice between open and closed models depends on the situation. The company recently formed a security committee on its board and published a blog post describing the security measures protecting its model training. OpenAI said it hoped that transparency would encourage other labs to adopt similar safeguards, though the post did not specify which threats those secrets need protecting from.
Speaking at the same Stanford event as Rice, RAND CEO Jason Matheny echoed her concerns about security gaps. US export controls on advanced chips have limited Chinese developers' ability to build their own cutting-edge models, he said, and that constraint has made stealing AI software outright more attractive. For China, Matheny argued, spending a few million dollars on a cyberattack to acquire AI model weights that cost an American company hundreds of billions of dollars to develop is a strategic bargain. "It's really hard, and it's really important, and we're not investing enough nationally to get that right," he said.
The Chinese embassy in Washington, DC, did not immediately respond to WIRED's request for comment on the theft allegations; it has previously dismissed such accusations as baseless smears by Western officials.
Google says it alerted authorities in the incident that led the US to charge a former employee with stealing AI chip secrets for China. Yet despite the company's insistence that it has robust protections in place, court documents suggest it took Google considerable time to catch the accused, Linwei Ding, a Chinese national who has pleaded not guilty to the federal charges.
Ding, an engineer hired in 2019 to develop software for Google's advanced data centers, is accused by prosecutors of transferring more than 500 files containing sensitive information to his personal Google account over the course of a year beginning in 2022. According to court documents, he evaded Google's systems for detecting unauthorized data transfers by first pasting the information into the Notes app on his Google-issued Apple laptop, converting the files to PDF format, and then uploading them elsewhere.
The US alleges that during the purported theft, Ding was in contact with the CEO of a Chinese AI startup and had taken steps to found his own AI company in China. If convicted, he faces up to 10 years in prison.
Will Knight