France's Welfare Algorithm Faces Legal Challenge Over Alleged Bias Against Disabled People and Single Mothers
Algorithms have been used to police welfare systems for years. Now they stand accused of discrimination.
A coalition of human rights organizations today launched legal action against the French government over its use of algorithms to detect miscalculated welfare payments, alleging that they discriminate against disabled people and single mothers.
The 15 groups bringing the case, which include the digital rights organization La Quadrature du Net, Amnesty International, and Collectif Changer de Cap, a French collective campaigning against inequality, argue that the algorithm, in use since the 2010s, violates both European privacy rules and French anti-discrimination laws.
"Valérie Pras from Collectif Changer de Cap notes that a legal case in France involving a public algorithm is unprecedented. She expresses a desire for the prohibition of such algorithms, pointing out that various social groups in France employ scoring algorithms aimed at the impoverished. Pras believes that if they manage to outlaw this particular algorithm, it will set a precedent that could lead to the banning of similar algorithms."
According to the complaint, filed with France's highest administrative court on October 15, the CNAF, the agency that administers French social security benefits, analyzes the personal data of more than 30 million people: those applying for state benefits as well as the people they live with and their family members.
Using that personal data, the algorithm assigns each person a score between 0 and 1, an estimate of the likelihood that they are receiving payments they are not entitled to, whether through fraud or unintentional error.
France is far from the only country using algorithms to search for errors or fraud in its welfare system. Last year, a three-part joint investigation by WIRED and Lighthouse Reports examined the use of fraud-detection algorithms in the welfare systems of the Netherlands, Denmark, and Serbia.
People flagged with high risk scores can then face investigations that welfare recipients across the region have described as stressful and intrusive, and that can result in their benefits being suspended.
The complaint argues that the CNAF's data processing amounts to mass surveillance and a disproportionate intrusion into privacy, and that the harms of these algorithmic operations fall hardest on people in vulnerable situations.
The CNAF has not published the source code of the model it currently uses to detect mispaid benefits. But based on its analysis of earlier versions of the algorithm, believed to have been in use until 2020, La Quadrature du Net alleges that the model discriminates against marginalized groups, assigning higher risk scores to people with disabilities, among others.
"Bastien Le Querrec, a legal specialist at La Quadrature du Net, points out that individuals receiving the Allocation Adulte Handicapé (AAH), a financial support designed for those with disabilities, are specifically identified by a certain element within the algorithm. He notes that the algorithm assigns a higher risk score to recipients of the AAH who are also employed."
The groups also contend that the system inherently discriminates against single mothers, who are most often the sole caregivers, because it scores single-parent households higher than two-parent ones. "Under the criteria of the 2014 version of the algorithm, people who have been divorced within the past 18 months receive a higher score," explains Le Querrec.
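CNAF keeps its current model confidential, but the criteria described above map naturally onto a simple additive scoring model. Purely as an illustration, and emphatically not CNAF's actual code, here is a minimal Python sketch of how a logistic risk score over such household flags could work; every feature name and coefficient below is invented:

```python
import math

# Hypothetical illustration only: CNAF's real model and weights are not public.
# Invented coefficients; positive values push the 0-1 risk score upward.
WEIGHTS = {
    "receives_aah_and_works": 1.2,   # disability benefit recipient who is employed
    "divorced_past_18_months": 0.8,  # recent divorce
    "single_parent_household": 0.9,  # single-parent household
}
BIAS = -2.0  # baseline log-odds before any flags apply

def risk_score(household: dict) -> float:
    """Map household flags to a score between 0 and 1 via the logistic function."""
    log_odds = BIAS + sum(w for name, w in WEIGHTS.items() if household.get(name))
    return 1 / (1 + math.exp(-log_odds))

# A recently divorced single parent scores far higher than an unflagged household,
# regardless of whether any overpayment actually exists.
print(risk_score({"single_parent_household": True, "divorced_past_18_months": True}))  # ~0.43
print(risk_score({}))  # ~0.12
```

In any model of this shape, the flagged characteristics raise the score mechanically, whether or not a claimant has done anything wrong, which is the core of the groups' discrimination argument.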
Changer de Cap says it has been approached for help by single mothers and disabled people who have been subjected to these investigations.
The CNAF, which administers financial aid including housing, disability, and child benefits, did not immediately respond to a request for comment or to WIRED's question about whether the algorithm currently in use has changed significantly since the 2014 version.
As in France, human rights groups in other European countries argue that such systems subject the poorest members of society to intense surveillance, often with grave consequences.
In the Netherlands, tens of thousands of people, many of them from the country's Ghanaian community, were falsely accused of defrauding the child benefits system. They were not only ordered to repay the money the algorithm said they had supposedly stolen; many of those affected say they were also left with spiraling debt and destroyed credit ratings.
The problem lies not in how the algorithms are designed, but in how they are used in welfare systems, according to Soizic Pénicaud, a lecturer in AI policy at Sciences Po Paris who previously worked for the French government on public-sector algorithm transparency. "Using algorithms in the context of social policy comes with far more risks than benefits," she says. "I haven't seen a single example in Europe or in the world where these systems have been used with positive results."
The case has implications beyond France. Welfare algorithms are expected to be an early test of how the EU's new AI rules will be enforced once they take effect in February 2025. From that point, "social scoring", the use of AI systems to evaluate people's behavior and then subject some of them to detrimental treatment, will be banned across the EU.
"According to Matthias Spielkamp, cofounder of Algorithm Watch, a nonprofit organization, numerous welfare systems engaged in fraud detection could essentially amount to social scoring. However, it's probable that individuals representing the public sector may contest this viewpoint, leading to debates over the definition of these systems that could eventually reach the judiciary. "This poses a challenging issue," Spielkamp notes.