Undertones: If on welfare better learn Dutch and be a man

Algorithms in the Netherlands show pervasive structural inequalities

Originally published on Global Voices

Illustration by Global Voices.

This story is part of Undertones, Global Voices' Civic Media Observatory's newsletter. Subscribe to Undertones here.

Welcome to Undertones, the newsletter where we decode narratives. In this edition, we cover how people speak about the (mis)use of AI in the Netherlands, thanks to researcher Leon Ingelse. 

Data scientist Ingelse analyzed discussions about algorithms in the Netherlands. After all, they are a big deal in the Low Lands: in 2021, the entire government fell after years of mishandling an AI-based fraud detection system aimed at childcare benefit recipients. Thousands of families, primarily of migrant backgrounds, had been wrongly accused of cheating the system. The Netherlands, a country that takes pride in efficiency and the use of technology in government services, is a case study of how the misuse of AI can hurt the lives of individuals and undermine trust in authorities.

Nevertheless, algorithms continue to be used in municipalities throughout the Netherlands to determine who is more or less likely to commit welfare fraud — an endless hot topic in Europe. However, nearly all of this AI technology is opaque to journalists and researchers, as authorities cite privacy and intellectual property concerns. Only Rotterdam, the second-largest Dutch city, recently agreed to share the skeleton of its algorithm with journalists from the investigative outlets Lighthouse Reports and Follow The Money. The city also accidentally shared the training data sets, which provided invaluable insight into how the algorithm works from the inside out.

The journalists found that the algorithm is a scoring card fed with personal criteria such as age, sex, and marital status, as well as elements of behavior and language skills. Rotterdam initially developed the algorithm with the help of the consulting firm Accenture.

“The data fed into the algorithm ranges from invasive (the length of someone’s last romantic relationship) and subjective (someone’s ability to convince and influence others) to banal (how many times someone has emailed the city) and seemingly irrelevant (whether someone plays sports),” Lighthouse writes. 

In other words, if you are a) a woman, b) plurilingual, c) in your twenties or thirties, and d) a parent, the algorithm files you as someone at risk of committing fraud, independently of any actual fraudulent behavior in the past. If you appear shy to the social worker, you are even worse off. You will not know whether you are on the blacklist, or why, yet uncomfortable investigations by local authorities can follow, along with the potential loss of welfare benefits.
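To make the scoring-card mechanics concrete, here is a minimal sketch assuming a simple weighted checklist. Every feature name and weight below is invented for illustration; this is not Rotterdam's actual model, whose variables Lighthouse Reports had to reconstruct.

```python
# Hypothetical scoring card: each matched attribute adds (or subtracts) a
# fixed weight; the total is the person's "fraud risk" score. All names and
# weights here are invented for this sketch.
HYPOTHETICAL_WEIGHTS = {
    "is_woman": 0.12,
    "is_parent": 0.10,
    "age_20s_or_30s": 0.08,
    "limited_dutch_proficiency": 0.15,
    "appeared_shy_to_caseworker": 0.09,  # a subjective judgment, scored like hard data
    "plays_sports": -0.02,
}

def risk_score(person: dict) -> float:
    """Sum the weights of every attribute the person matches."""
    return sum(weight for feature, weight in HYPOTHETICAL_WEIGHTS.items()
               if person.get(feature))

applicant = {
    "is_woman": True,
    "is_parent": True,
    "age_20s_or_30s": True,
    "limited_dutch_proficiency": True,
}
print(f"risk score: {risk_score(applicant):.2f}")  # 0.45
# Nothing in this score measures actual fraudulent behavior: it rises
# purely because of who the applicant is, which is the core critique.
```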

Screenshot of the risk scores calculated by Rotterdam's welfare fraud algorithm, as reconstructed by Lighthouse Reports.

“The Dutch tend to believe they don't have structural racism, whereas this algorithm clearly shows otherwise, as people with limited Dutch proficiency are discriminated against,” Ingelse says. Other discriminatory uses of algorithms in the Netherlands have also come to light, such as the profiling of visa applicants by nationality and ethnicity.

Algorithmic bias is not new

Journalists have pointed out the intrinsic biases in algorithmic risk scores elsewhere. As early as 2016, an investigation by ProPublica revealed that within the U.S. criminal justice system, Black individuals were twice as likely to be erroneously identified as “high risk” for reoffending, while white individuals were twice as likely to be wrongly categorized as “low risk.” In the Rotterdam case, experts argue that the algorithm performs only marginally better than randomly selecting welfare beneficiaries for investigation, according to Lighthouse.
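To unpack what “twice as likely to be erroneously identified” means, the short sketch below computes the false positive rate, i.e., the share of people who did not reoffend but were still flagged as high risk, separately for each group. The counts are illustrative stand-ins that roughly mirror the disparity ProPublica reported, not the original data.

```python
# False positive rate per group: of the people who did NOT reoffend,
# how many were nevertheless flagged "high risk"? The counts below are
# illustrative stand-ins, not ProPublica's actual dataset.
def false_positive_rate(wrongly_flagged: int, total_non_reoffenders: int) -> float:
    return wrongly_flagged / total_non_reoffenders

toy_counts = {
    "group A": (450, 1_000),  # 450 of 1,000 non-reoffenders wrongly flagged
    "group B": (230, 1_000),  # 230 of 1,000 non-reoffenders wrongly flagged
}

for group, (flagged, total) in toy_counts.items():
    print(f"{group}: false positive rate = {false_positive_rate(flagged, total):.0%}")
# A model can look reasonably "accurate" overall while hiding a roughly
# 2x gap in who bears the cost of its mistakes.
```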

AI algorithms exhibit bias due to training on skewed or insufficient real-world data and the influence of human developers’ inherent prejudice. Biases enter the system in other ways too.
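One of those pathways can be shown with a toy simulation. The sketch below assumes, purely for illustration, that true fraud rates are identical across two groups, but that one group was historically investigated far more often; because fraud is only ever recorded when someone is investigated, the training labels inherit the skew, and any model fit on them will too.

```python
import random

random.seed(42)

# Assumption for this sketch: true fraud is equally rare (5%) in both groups.
# Only *detected* fraud becomes a training label, and historical practice in
# this toy world investigated non-Dutch speakers far more often.
population = []
for _ in range(100_000):
    speaks_dutch = random.random() < 0.7
    committed_fraud = random.random() < 0.05
    investigated = random.random() < (0.05 if speaks_dutch else 0.40)
    label = committed_fraud and investigated  # fraud is only "seen" if investigated
    population.append((speaks_dutch, label))

def observed_fraud_rate(speaks_dutch: bool) -> float:
    labels = [label for s, label in population if s == speaks_dutch]
    return sum(labels) / len(labels)

print(f"Dutch speakers:     {observed_fraud_rate(True):.2%}")   # ~0.25%
print(f"non-Dutch speakers: {observed_fraud_rate(False):.2%}")  # ~2.00%
# True fraud rates are identical, yet the training data makes one group look
# roughly eight times riskier -- a model fit on these labels reproduces that.
```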

How are people perceiving these revelations?

In the Dutch AI scandals, public debate does not tend to happen via original tweets or posts, but rather in comment sections, which, surprisingly, show an interesting array of opinions. Most of those launching the debate are journalists.

“It could be that the Dutch are tired after the Toeslagenaffaire [childcare benefits scandal], and are more focused on ChatGPT,” Ingelse says. “Also, public figures that have been part of the scandals tactically avoid answering questions about their involvement.” He adds that the only people burdened by these specific algorithms are minorities, who are often rendered invisible. “Notice how these fraud algorithms are never used in profit taxes, for example, which would target the rich,” Ingelse says.

We have reached out to local Rotterdam authorities as well as the newly created group of families impacted by the algorithm to understand their perspectives, but have not received answers.

Narrative 1: Algorithms that help fight crime should be implemented even if biased

This narrative in a nutshell: “The algorithm is doing its job”

This narrative implies that the algorithm, even if flawed, works to fight fraud. This discourse has been around for years. For example, at the height of the childcare benefits scandal in 2019, far-right blogger Pim Beaart, alias “Hannibal,” argued in an editorial that politicians only “want to use algorithms when they validate their political opinions.”

More recently, one of the largest conservative Dutch opinion magazines, EW, claimed that automated decisions in visa applications should be the norm, even though they consistently target people of certain nationalities, such as citizens of Suriname, a former Dutch colony. On Facebook, the magazine asks: “Shouldn't the government just use digital programs that dare to filter [visa applicants] in cold blood?” Others, such as the well-known media blog GeenStijl, also argue in favor of efficiency, one of the most popular overarching narratives in the Netherlands.

“The Dutch consider themselves to favor efficiency above anything else,” Ingelse says. The Netherlands is leading the way in implementing digital IDs, incorporating technology into education and healthcare, and advancing governmental digitalization. Whatever the Dutch do — and how they deal with algorithmic biases — might influence how other countries develop their own AI. However, few people understand the basics of how algorithms work.

“Besides the expected presence of xenophobic comments and racism, this narrative will probably always be present as long as people don't understand how algorithms work and the difference between objective data and the interpretation of that data,” Ingelse says. Correlation does not mean causation.

Narrative 2: Dutch authorities are harming their citizens by using discriminatory algorithms

This narrative in a nutshell: “Stop using discriminatory AI”

In tweets and in comments, people claim that the Netherlands has a history of discriminatory algorithms that re-victimize vulnerable populations. Focusing on someone's poor Dutch skills is, for many, a subtext for targeting immigrants.

Some go further, stating that “algorithms should be open and transparent to ensure they are not biased.” This is not the case for the vast majority of algorithms out there. So, while people criticized Rotterdam's use of the algorithm, many also lauded the city for at least sharing it with journalists.

Most of these posts and comments come from people defending human rights, not from the people directly impacted by the algorithms themselves. That is perhaps precisely because vulnerable populations are less visible, or less vocal in Dutch, online. However, when interviewed directly, people are very opinionated against the use of discriminatory AI. Dutch outlet Vers Beton went to Rotterdam's market, frequented by people with less economic means, and interviewed shoppers — all of them criticized the use of algorithms.

Dutch legislators are also pushing for more AI regulation. “The central problem with AI development is that it's entirely market-driven: it's a multi-billion dollar industry with no rules,” Kim van Sparrentak, a member of the European Parliament with the Greens/EFA group, writes in an extensive Twitter thread.

Upcoming EU Artificial Intelligence Act

The European Union is negotiating an “AI Act” intended to regulate precisely this kind of situation. In mid-June, the European Parliament sent its draft to the Council of the European Union, where it is currently being discussed. As it stands at the time of writing, one of the points of the legislation is to prohibit “social scoring” in order to avoid discrimination.

This newsletter is published through the Community CMO. If you want to learn more or be a part of it, consider pitching us or joining our next live methodology training on July 7 at 12 PM UTC.

