Calling All Citizens: The Nationwide Hunt for AI Vulnerabilities Begins
The US Government Is Calling on Citizens to Identify Weaknesses in Generative AI
At the 2023 Defcon hacker conference in Las Vegas, leading AI companies partnered with groups focused on algorithmic fairness and transparency, challenging thousands of attendees to probe generative AI platforms for weaknesses. This "red-teaming" exercise, backed by the US government, was a step toward opening these powerful but opaque systems to greater scrutiny. Now Humane Intelligence, a nonprofit dedicated to ethical AI and algorithm evaluation, is taking that model further. Working with the US National Institute of Standards and Technology, the group has issued an open call to any US resident to take part in the qualifying round of a nationwide red-teaming effort aimed at evaluating AI-powered office productivity software.
The qualifying round will take place online and is open to both developers and the general public, as part of NIST's AI challenge program known as Assessing Risks and Impacts of AI, or ARIA. Participants who advance past the qualifying round will take part in an in-person red-teaming event during the last week of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand the capacity for rigorous testing of generative AI technologies' security, resilience, and ethics.
"The typical user of these models lacks the capacity to judge if the model serves its intended purpose," Theo Skeadas, the chief of staff at Humane Intelligence, notes. "Therefore, our goal is to make the evaluation process accessible to all, ensuring that users can independently verify if the model fulfills their requirements."
The culminating event at CAMLIS will split participants into two groups: a red team attempting to attack the AI systems and a blue team working to defend them. Participants will use NIST's AI 600-1 profile, part of the agency's AI Risk Management Framework, as a rubric for measuring whether the red team can produce outcomes that violate the systems' expected behavior.
"NIST's ARIA is drawing on structured user feedback to understand how AI models are used in real-world scenarios," says Rumman Chowdhury, founder of Humane Intelligence, who is also a contractor at NIST's Office of Emerging Technologies and a member of the US Department of Homeland Security's AI safety and security board. "The ARIA team consists mostly of experts in sociotechnical test and evaluation, and they are using that background to evolve the field toward rigorous scientific evaluation of generative AI."
Chowdhury and Skeadas say the NIST partnership is just the first of many AI red-team collaborations that Humane Intelligence will announce in the coming weeks with US government agencies, international governments, and nongovernmental organizations. The goal is to push the companies and organizations that develop what are now black-box algorithms toward greater transparency and accountability, through mechanisms such as "bias bounty challenges" that reward individuals for finding problems and biases in AI models.
Skeadas argues that the community involved in testing and evaluating these systems should extend well beyond developers. Policymakers, journalists, and members of civil society, including people without technical expertise, should all take part, he says, and underrepresented groups, such as speakers of minority languages and people from nonmajority cultures and perspectives, must be able to participate as well.