Algorithms have controlled welfare systems for years. Now they are under fire for bias

    “People who receive social benefits reserved for people with disabilities [the Allocation Adulte Handicapé, or AAH] are directly targeted by a variable in the algorithm,” says Bastien Le Querrec, legal expert at La Quadrature du Net. “The risk score for people who receive AAH and who work is increased.”

    Because the algorithm also rates single-parent families higher than two-parent families, the groups argue it indirectly discriminates against single mothers, who are statistically more likely to be sole caregivers. “In the criteria for the 2014 version of the algorithm, the score for beneficiaries separated for less than 18 months is higher,” says Le Querrec.

    Changer de Cap says it has been contacted by both single mothers and people with disabilities seeking help after finding themselves the subject of investigation.

    The CNAF, the agency responsible for distributing financial aid including housing, disability, and child benefits, did not immediately respond to a request for comment or to WIRED's question about whether the algorithm currently in use had changed significantly since the 2014 version.

    As in France, human rights organizations in other European countries argue that such algorithms subject the lowest-income members of society to intense surveillance, often with profound consequences.

    When tens of thousands of people in the Netherlands, many of them from the country's Ghanaian community, were falsely accused of defrauding the child benefits system, they were not only ordered to pay back the money the algorithm said had been stolen. Many claim they were also left with mounting debt and destroyed credit ratings.

    The problem is not the way the algorithm is designed, but its use in the social security system, says Soizic Pénicaud, lecturer in AI policy at Sciences Po Paris, who previously worked for the French government on the transparency of public sector algorithms. “Using algorithms in the context of social policy carries far more risks than benefits,” she says. “I have not seen a single example in Europe or in the world where these systems have been used with positive results.”

    The case has consequences beyond France. Welfare algorithms are expected to be an early test of how the EU's new AI rules will be enforced once they come into effect in February 2025. From then on, “social scoring,” the use of AI systems to evaluate people's behavior and then subject some of them to detrimental treatment, will be banned across the bloc.

    “Many of these social security systems that are doing this fraud detection may, in my opinion, be social scoring in practice,” says Matthias Spielkamp, co-founder of the nonprofit AlgorithmWatch. Yet public sector representatives are unlikely to agree with that definition, and arguments over how to define these systems are likely to end up in court. “I find this a very difficult question,” says Spielkamp.