AI
US National Security Concerns: Are AI Developers Doing Enough to Guard Against Espionage?
By Paresh Dave
US Security Experts Warn That Leading AI Companies Aren't Doing Enough to Protect Their Technology
Last year, AI developers including Google and OpenAI signed a safety agreement with the Biden administration, committing to assess the risks of software like ChatGPT. Now a former domestic-policy adviser to President Biden who helped forge that deal says the companies need to do more to protect their proprietary technology from espionage, particularly by China.
"Susan Rice remarked that due to their lagging position, China is likely to seek benefit from our advancements," she commented, reflecting on her time after departing from the White House the previous year. During a discussion on AI and geopolitics at a Stanford University event organized by the Institute for Human-Centered AI on Wednesday, she expressed her concerns. "This could mean acquiring and adapting our premier open-source models or illicitly obtaining our top secrets. It's crucial for us to examine the entire range of strategies to maintain our lead. My concern is that we might be falling behind, especially in terms of security."
Rice, who served as national security adviser under President Obama, has reason for concern. In March, the US Justice Department charged a former Google software engineer with stealing trade secrets about the company's TPU AI chips, allegedly with the intention of using them in China.
Prosecutors warned at the time that the case might be just one example of China's attempts to gain an unfair edge in what has been described as a race for supremacy in artificial intelligence. Government officials and security researchers worry that advanced AI systems could be abused to generate highly convincing deepfakes for disinformation campaigns, or even recipes for potent bioweapons.
Not all AI developers and researchers agree that code and other components need such protection. Some argue that today's models aren't yet capable enough to warrant it, and companies such as Meta, which develops open-source AI models, already publish much of the information that people like Rice recommend safeguarding. Rice acknowledged that stricter security could put US firms at a disadvantage by shrinking the pool of contributors improving their AI.
Attention to securing AI systems is growing. The US think tank RAND recently published a report identifying 38 ways sensitive information could leak from AI projects, ranging from bribery and physical break-ins to the exploitation of technical backdoors.
RAND recommended that companies encourage workers to report suspicious behavior by colleagues and restrict access to the most sensitive material to a small number of employees. It placed particular emphasis on protecting model weights, the variables inside an artificial neural network that are tuned during training to give it useful capabilities, such as ChatGPT's ability to answer questions.
Under the sweeping executive order on AI that President Biden issued last October, the US National Telecommunications and Information Administration is expected to publish a report this year weighing the pros and cons of keeping weights confidential. The order already requires developers of advanced AI models to inform the US Commerce Department of the "physical and cybersecurity strategies implemented to safeguard those model weights." The US is also considering export controls that would restrict the sale of AI technology to China, Reuters reported last month.
In its submission to the NTIA ahead of the report, Google said it anticipates increasing attempts to disrupt, degrade, deceive, and steal its models, but that its technology is protected by a dedicated team of engineers and researchers with deep expertise in security, safety, and reliability. The company added that it is developing a framework that will include an expert committee to govern access to models and their weights.
In its own comments to the NTIA, OpenAI, the developer of GPT-4 and ChatGPT, argued that both open and closed models have their place depending on the circumstances. The company, which recently formed a security committee on its board, this week published a blog post describing its security practices, expressing hope that the transparency would prompt other labs to adopt similar protections. It did not specify whom the secrets needed protecting from.
Appearing alongside Rice at Stanford, RAND CEO Jason Matheny echoed her concerns about security gaps. US export controls limiting China's access to powerful computer chips have hobbled Chinese developers' efforts to build their own models, he said, and have made stealing AI software outright more attractive. For a few million dollars spent on a cyberattack, China could steal model weights that might cost an American company hundreds of billions of dollars to develop. Matheny said the nation isn't investing nearly enough in countering the threat, calling the problem both hard and vitally important.
The Chinese embassy in Washington, DC, has not responded to WIRED's request for comment on the theft allegations, which it has previously dismissed as unfounded smears by Western officials.
Google referred the case behind the US prosecution over alleged theft of AI chip secrets for China to law enforcement. But despite the safeguards Google says it has in place, court filings suggest it took the company considerable time to catch Linwei Ding, a Chinese national who has pleaded not guilty to the charges.
Ding joined Google as an engineer in 2019 to help develop software for its advanced data centers, according to prosecutors. Starting in 2022 and continuing for roughly a year, he allegedly transferred more than 500 files of confidential data to his personal Google account. Court filings describe his method: he copied information into Apple's Notes app on his Google-issued laptop, then converted the files to PDFs and uploaded them elsewhere, evading Google's systems for detecting unauthorized data transfers.
During the alleged theft, prosecutors say, Ding was in contact with the CEO of a Chinese AI startup and had begun founding his own AI company in China. If convicted, he faces up to 10 years in prison.
© 2024 Condé Nast. All rights reserved. Purchases made through our site may generate revenue for WIRED as part of our affiliate agreements with retailers. Content from this site cannot be copied, shared, broadcasted, stored, or utilized in any form without explicit consent from Condé Nast. Advertising Choices.