Amnesty International is calling on Sweden’s social insurance agency to immediately discontinue its machine learning-based welfare system, following an investigation by Lighthouse Reports and Svenska Dagbladet that found it to be discriminatory.
By Sebastian Klovig Skelton, Data & ethics editor
Published: 29 Nov 2024 16:00
Sweden’s algorithmically powered welfare system is disproportionately targeting marginalised groups in Swedish society for benefit fraud investigations, and must be immediately discontinued, Amnesty International has said.
An investigation published by Lighthouse Reports and Svenska Dagbladet (SvB) on 27 November 2024 found that the machine learning (ML) system being used by Försäkringskassan, Sweden’s Social Insurance Agency, is disproportionately flagging certain groups for further investigation over social benefits fraud, including women, individuals with “foreign” backgrounds, low-income earners and people without university degrees.
Based on an analysis of aggregate data on the outcomes of fraud investigations where cases were flagged by the algorithms, the investigation also found the system was largely ineffective at identifying men and wealthy people who had actually committed some kind of social security fraud.
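The kind of aggregate analysis described above can be illustrated with a minimal sketch comparing flag rates across groups. The figures and group labels below are invented for illustration; they are not the investigation's actual data or methodology.

```python
# Minimal sketch of measuring disparate flagging rates from aggregate
# counts. All numbers here are hypothetical, invented for illustration;
# they do not reproduce the Lighthouse Reports / SvB analysis.

def flag_rate(flagged: int, total: int) -> float:
    """Share of a group's cases that the system flagged for investigation."""
    return flagged / total

# Hypothetical aggregate counts: (cases flagged, total cases) per group.
groups = {
    "women": (400, 1000),
    "men": (150, 1000),
}

rates = {name: flag_rate(f, t) for name, (f, t) in groups.items()}
disparity = rates["women"] / rates["men"]
print(f"flag-rate ratio (women/men): {disparity:.2f}")
```

A ratio far from 1.0 on such aggregates is the kind of signal that indicates one group is being flagged disproportionately often relative to another.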
To detect social benefits fraud, the ML-powered system – introduced by Försäkringskassan in 2013 – assigns risk scores to social security applicants, which then automatically triggers an investigation if the risk score is high enough.
Those with the highest risk scores are referred to the agency’s “control” department, which takes on cases where there is suspicion of criminal intent, while those with lower scores are referred to case workers, where they are investigated without the presumption of criminal intent.
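The two-tier routing described above amounts to a threshold rule on the risk score. The sketch below illustrates that logic; the score scale, cutoff values and function name are hypothetical, since Försäkringskassan has not published its model.

```python
# Illustrative sketch of threshold-based case routing. The cutoffs and
# score scale are hypothetical; the agency's actual model is not public.

CONTROL_THRESHOLD = 0.9  # hypothetical: cases with suspicion of criminal intent
REVIEW_THRESHOLD = 0.6   # hypothetical: routine case-worker review

def route_case(risk_score: float) -> str:
    """Route an applicant based on an ML-assigned fraud risk score."""
    if risk_score >= CONTROL_THRESHOLD:
        return "control department"  # investigated with presumption of intent
    if risk_score >= REVIEW_THRESHOLD:
        return "case worker"         # investigated without that presumption
    return "no investigation"

print(route_case(0.95))  # control department
print(route_case(0.70))  # case worker
print(route_case(0.20))  # no investigation
```

The concern raised in the article is not the routing rule itself but the scores feeding it: if the model systematically assigns higher scores to certain groups, the thresholds translate that bias directly into more investigations of those groups.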
Once cases are flagged to fraud investigators, they then have the power to trawl through a person’s social media accounts, obtain data from institutions such as schools and banks, and even interview an individual’s neighbours as part of their investigations. Those incorrectly flagged by the social security system have complained they then end up facing delays and legal hurdles in accessing their welfare entitlement.
“The entire system is akin to a witch hunt against anyone who is flagged for social benefits fraud investigations,” said David Nolan, senior investigative researcher at Amnesty Tech. “One of the main issues with AI [artificial intelligence] systems being deployed by social security agencies is that they can aggravate pre-existing inequalities and discrimination. Once an individual is flagged, they’re treated with suspicion …