OASI Register (full version)
Algorithm
Implemented by
Location
Developed by
Domain
Aim
Used in public administration / private sector
Social impact
Has it been audited?
Implemented since (yyyy-mm-dd)
Implemented until (yyyy-mm-dd) (only if no longer in use)
Jurisprudence
Adoption stage
Additional notes
Links and sources
Entry last updated on (yyyy-mm-dd)
GALE-SHAPLEY | Processing high school applicants
New York City Department of Education
New York City (US)
United States (US)
N/A
education and training
profiling and ranking people
automating tasks
public administration
socioeconomic discrimination

Yes, there was a study from the NYC Independent Budget Office (NYC Independent Budget Office, 2016)

2003-XX-XX
N/A

N/A

not known

The New York City Department of Education has deployed algorithms to process student admissions into New York City's public high school system. The system uses a multipurpose matching algorithm (the 'Gale-Shapley' algorithm) that matches students with schools after evaluating a variety of criteria (Herold 2017). It processes a set of information from students and their parents, including a rank-ordered list of the schools they prefer, together with institutional data, such as certain qualities of each school and its admissions rules (ibid). Various questions concerning the system have been raised. Firstly, on account of the algorithm's metrics, students have a rather low probability of being admitted into one of their preferred public high schools in the city (Abrams 2017). Secondly, researchers have found evidence that the algorithm disproportionately matches lower-income students with lower-performing schools (Nathanson, Corcoran and Baker-Smith 2013). And, despite parents being
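The matching mechanism named above is the student-proposing Gale-Shapley ("deferred acceptance") procedure. As a purely illustrative aid, the following minimal Python sketch shows how deferred acceptance works; the student preferences, school rankings, capacities and function names are hypothetical and are not the Department of Education's actual implementation.

    # Minimal sketch of student-proposing deferred acceptance (Gale-Shapley).
    # All data below is hypothetical and for illustration only.
    def deferred_acceptance(student_prefs, school_ranks, capacity):
        # student_prefs: student -> ordered list of schools (most preferred first)
        # school_ranks:  school  -> {student: rank} (lower rank = more preferred)
        # capacity:      school  -> number of seats
        next_choice = {s: 0 for s in student_prefs}        # next list position to try
        tentative = {school: [] for school in capacity}    # provisional admits
        free = list(student_prefs)                         # students not yet matched

        while free:
            student = free.pop()
            prefs = student_prefs[student]
            if next_choice[student] >= len(prefs):
                continue                                   # list exhausted: stays unmatched
            school = prefs[next_choice[student]]
            next_choice[student] += 1
            tentative[school].append(student)
            # Keep only the school's most-preferred students, up to capacity.
            tentative[school].sort(key=lambda s: school_ranks[school][s])
            while len(tentative[school]) > capacity[school]:
                free.append(tentative[school].pop())       # displaced student re-enters pool
        return tentative

    students = {"ana": ["north", "south"], "ben": ["north", "south"], "cal": ["south", "north"]}
    ranks = {"north": {"ana": 1, "ben": 2, "cal": 3}, "south": {"cal": 1, "ana": 2, "ben": 3}}
    print(deferred_acceptance(students, ranks, {"north": 1, "south": 2}))

The resulting matching is stable: no student and school both prefer each other to their assigned match, which is the property such school-choice systems rely on.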

  • Herold, B. (2017). ‘Open Algorithms’ Bill Would Jolt New York City Schools, Public Agencies. [Blog] Education Week. Available at: http://blogs.edweek.org/edweek/DigitalEducation/2017/11/open_algorithms_bill_schools.html [Accessed 4 Sep. 2018].
  • Abrams, S. (2017). The Real Data for NYC High School Admissions | Teachers College Columbia University. [online] Teachers College – Columbia University. Available at: https://ncspe.tc.columbia.edu/center-news/the-real-data-for-nyc-high-school-admissions/ [Accessed 4 Sep. 2018].
2023-02-17
Algorithm to predict crime
Chicago Police Department
Chicago (US)
University of Chicago
policing and security
predicting human behaviour
public administration
racial discrimination

N/A

2017-XX-XX
N/A

N/A

active use

The University of Chicago developed a predictive policing algorithm which divides cities into 100-square-foot tiles and uses place-based predictions to determine the likelihood of crime occurring (Andrew Guthrie Ferguson, 2017). The algorithm uses historical crime data to detect patterns in two particular areas of crime: violent crimes and property crimes. These categories were chosen because they are highly likely to be reported and are less prone to enforcement bias (Lau, 2020). There are concerns about the algorithm, in particular a propensity to re-criminalize former convicts along lines which have historically been shown to be racially biased (York, 2022).

  • Andrew Guthrie Ferguson. (2017, October 3). The Police Are Using Computer Algorithms to Tell If You’re a Threat. Time; Time. https://time.com/4966125/police-departments-algorithms-chicago/
  • Lau, T. (2020, April 1). Predictive Policing Explained. Brennan Center for Justice. https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained
2023-02-10
Algorithm to determine mortgage prices
N/A
United States (US)
Freddie Mac and Fannie Mae
infrastructure
evaluating human behaviour
making personalised recommendations
private sector
socioeconomic discrimination
racial discrimination

N/A

N/A
N/A

N/A

active use

Mortgage payments are increasingly being determined by algorithms. Freddie Mac and Fannie Mae are loan guarantors who require lenders to use a specific credit-scoring algorithm which determines whether an applicant has reached the minimum credit threshold for approval. The algorithm was created in 2005 and used data going back to the 1990s. The algorithm is considered detrimental to people of color because of the systemic discrimination they have faced in lending practices.

  • Daniels, C. (2022). The financial sector is adopting AI to reduce bias and make smarter, more equitable loan decisions. But the sector needs to be aware of the pitfalls for it to work. Business Insider. https://www.businessinsider.com/how-the-finance-sector-uses-ai-to-make-loan-decisions-2022-8?r=US&IR=T
  • Martinez, E., & Kirchner, L. (2021, August 25). The Secret Bias Hidden in Mortgage-Approval Algorithms – The Markup. Themarkup.org. https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms
2023-02-10
Algorithm to match vulnerable people with available resources
Los Angeles municipality (US)
Los Angeles (US)
N/A
infrastructure
automating tasks
public administration
socioeconomic discrimination
state surveillance

N/A

N/A
N/A

N/A

active use

Los Angeles County employs an algorithm to address the mismatch between housing supply and demand. The system is based on two principles: prioritization and housing first. It is intended to put individuals and families into apartments as quickly as possible and thereafter offer supportive services. The system collects personal information such as birth dates, demographic information, and immigration and residency status. The information is used to rank a person on a scale from 0 to 17, with low scores indicating low risk and high scores indicating high risk. Those scoring between 0 and 3 are judged to need no housing intervention. Those scoring between 4 and 7 qualify to be assessed for limited-term rental subsidies and some case management services, an intervention strategy called rapid re-housing. Those scoring 8 and above qualify to be assessed for permanent supportive housing. Though it is beneficial, there are several concerns around the extensive data collected and the possible surveillance.
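As a purely illustrative sketch of the threshold logic described above (hypothetical function name; cut-offs taken only from the description in this entry, not from the county's actual code):

    # Hypothetical sketch of the score-to-intervention mapping described above.
    def housing_intervention(score: int) -> str:
        # score: result of the county's assessment survey (0-17 per this entry)
        if score <= 3:
            return "no housing intervention"
        if score <= 7:
            return "assess for rapid re-housing (limited-term subsidy and case management)"
        return "assess for permanent supportive housing"

    for s in (2, 6, 12):
        print(s, "->", housing_intervention(s))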

  • Aguma, J. (2022). A Matching Mechanism for Provision of Housing to the Marginalized. https://arxiv.org/pdf/2203.01477.pdf
  • Bowen, E. (2018). “Automating Inequality”: Algorithms In Public Services Often Fail The Most Vulnerable. NPR.org. https://www.npr.org/sections/alltechconsidered/2018/02/19/586387119/automating-inequality-algorithms-in-public-services-often-fail-the-most-vulnerab
2023-02-10
Algorithm to profile unemployed people
Polish Ministry of Labour and Social Policy
Poland
N/A
labour and employment
social services
profiling and ranking people
public administration
threat to privacy

Yes. In 2018 the Polish Constitutional Tribunal decided that the system needed to be better secured in a legislative act.

2014-XX-XX
N/A

Yes. In 2018 the Polish Constitutional Tribunal decided that the system needed to be better secured in a legislative act.

not known

The Polish Ministry of Labour and Social Policy has deployed an algorithm that profiles unemployed citizens and categorizes them according to types of welfare assistance (AlgorithmWatch 2019). The system asks 24 questions that determine two criteria: “distance from the labour market” and “readiness to enter or return to the labour market” (ibid). The system has been criticized for its opaque operation regarding the availability of public services and for the arbitrariness of its decisions, which stems from the simplification of the person's case interview. Moreover, the algorithm does not give the people it profiles access to its internal operations or an explanation of the decision that has been made about them (ibid). The system also acts almost entirely unilaterally; the Polish Ministry of Labour and Social Policy has limited oversight over the automated process. In total, the algorithm's lack of human intervention and supervision violates the legal principles enshrined in the EU's GDPR.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.108.
2023-01-17
Algorithm to identify spending patterns
American Express
N/A
N/A
business and commerce
profiling and ranking people
predicting human behaviour
private sector
threat to privacy
socioeconomic discrimination

N/A

2009-XX-XX
N/A

N/A

not known

In 2009, during the fallout of the financial crisis, American Express implemented algorithms to identify the spending patterns of customers who had trouble paying their bills and to reduce the financial risk these customers posed to the company by cutting their credit limits (O'Neil 2018). As a result, shopping in certain establishments could lower a person's credit limit. However, these establishments were often those frequented by people who were already in financial difficulty, so people in lower-income brackets were discriminated against by the algorithm.

Those affected saw their credit scores go down and, therefore, their borrowing costs go up. This, in combination with a limited ability to access credit, created a dire financial situation for many people. The algorithm effectively punished those who were already struggling with yet more financial problems (Lieber 2009).

  • Lieber, R. (2009). American Express Kept a (Very) Watchful Eye on Charges. The New York Times. [online] Available at: https://www.nytimes.com/2009/01/31/your-money/credit-and-debit-cards/31money.html [Accessed 7 Jun. 2019].
  • O’Neil, C. (2018). Weapons of math destruction. [London]: Penguin Books.
2023-01-16
Algorithm to find cases of trading cash for food stamps
US Department of Agriculture
United States (US)
US Department of Agriculture
business and commerce
evaluating human behaviour
public administration
socioeconomic discrimination

N/A

N/A
N/A

N/A

active use

The US Department of Agriculture (USDA) uses an algorithm to screen use of government assistance for fraud. The system tracks usage of the benefits and, where misuse is suspected, triggers a suspicious-activity flag that leads to the issuance of a charge letter to the person accused of misusing the benefits (Kim, 2007). After a charge letter is received, a retailer can file a counter within 10 days clarifying proper use of the benefits system or flaws in the calculations made by the algorithm. Despite this, in 2015, 2016 and 2017 all of the benefit-disqualification cases in five states resulted in permanent disqualification. The operation of the algorithm negatively impacts retailers in low-income neighbourhoods, as they often extend credit to customers who may not be able to afford food. Lending credit to those receiving benefits is considered a red flag in the system, so these retailers are shut down, which further harms the community (Brown, 2018).

  • Brown, C. (2018, October 8). How an algorithm kicks small businesses out of the food stamps program on dubious fraud charges. The Counter. https://thecounter.org/usda-algorithm-food-stamp-snap-fraud-small-businesses/
  • Pennsylvania Capital-Star. (2020, February 16). AI algorithms intended to root out welfare fraud often end up punishing the poor instead | Analysis. https://www.penncapital-star.com/commentary/ai-algorithms-intended-to-root-out-welfare-fraud-often-end-up-punishing-the-poor-instead-analysis/
2023-01-16
Algorithm to determine tax fraud
Italian Revenue Agency
Italy
Italian Revenue Agency
business and commerce
automating tasks
public administration
state surveillance

N/A

N/A
N/A

N/A

active use

In 2022, the Ministry of Economy and Finance authorized the Italian Revenue Agency to launch an algorithm that identifies taxpayers at risk of not paying their taxes (Spaggiari, 2022). The algorithm compares tax filings, earnings, property records, bank statements and electronic payments, looks for discrepancies, and then sends a letter to affected taxpayers alerting them to the discrepancies and requesting an explanation (Brancolini, 2022).

The algorithm's use has raised flags about its surveillance capabilities, as the state subjects its citizens to more and more algorithmic monitoring. Data protection policies require that the owner of the data authorize access to personal information; there is therefore a balance to be struck with the privacy concerns relating to citizens' data (Group, 2023).

  • Albarea, A., Bernasconi, M., Marenzi, A., & Rizzi, D. (2019). Income Underreporting and Tax Evasion in Italy: Estimates and Distributional Effects. Review of Income and Wealth. https://doi.org/10.1111/roiw.12444
  • Brancolini, J. (2022). Italy Turns to AI to Find Taxes in Cash-First, Evasive Culture. News.bloombergtax.com. https://news.bloombergtax.com/daily-tax-report-international/italy-turns-to-ai-to-find-taxes-in-cash-first-evasive-culture
2023-01-10
Predictim Algorithm to profile babysitters
N/A
United States (US)
Skydeck accelerator
business and commerce
profiling and ranking people
private sector
disseminating misinformation
other kinds of discrimination

N/A

2018-XX-XX
N/A

N/A

active use

Predictim is an algorithm that generates a risk report on a person by combing through their social media posts. The algorithm was developed at the University of California, Berkeley's Skydeck accelerator. The service is specific to assessing babysitters and gives a risk score out of five to help parents and caretakers assess their employability (Thubron, 2018).

The algorithm uses natural language processing and computer vision and combs through Facebook (now Meta), Instagram and Twitter posts to flag language that may indicate an inability to take care of children or be a responsible guardian (Wiggers, 2018). The system has been flagged for several concerns, primarily inaccuracy and illegal data scraping. In the first place, people who have assessed babysitters through the system have received risk scores signalling danger that do not correspond to reality (Lee, 2018).

  • Harwell, D. (2018, November 23). Wanted: The “perfect babysitter.” Must pass AI scan for respect and attitude. The Washington Post. https://www.washingtonpost.com/technology/2018/11/16/wanted-perfect-babysitter-must-pass-ai-scan-respect-attitude/
  • Lee, D. (2018, November 27). Predictim babysitter app: Facebook and Twitter take action. BBC News. https://www.bbc.com/news/technology-46354276
2023-01-10
PARCOURSUP | Screening of university applicants
France's Ministry of Education
France
N/A
education and training
evaluating human behaviour
profiling and ranking people
public administration
threat to privacy
socioeconomic discrimination

N/A

2018-XX-XX
N/A

N/A

not known

A French algorithmic admission tool, ‘Parcoursup’, made headlines in 2018 when it denied many prospective students admission to French public universities (AlgorithmWatch 2019). Reports indicate that the Parcoursup system advised French universities to reject and admit students based on opaque selection criteria (ibid). The French government failed to notify users that their rejection was the product of algorithmic decision-making, although it did publish the source code behind the matching selection. The software allegedly produces its decisions by processing personal data and applicant information (ibid). Outraged citizens claimed that the algorithm violated their right to enroll in a university once they had obtained a high-school diploma.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.72
2023-01-02
Algorithm to predict potential child abuse and neglect
Allegheny County Government
Allegheny County (US)
United States (US)
N/A
social services
profiling and ranking people
predicting human behaviour
evaluating human behaviour
public administration
socioeconomic discrimination
racial discrimination

N/A

2016-08-XX
N/A

N/A

not known

In Allegheny County, Pennsylvania, software containing a statistical model has been designed to predict potential child abuse and neglect by assigning a risk score to every case (Hurley 2018). This score ranges from 0 (lowest) to 20 (highest) and is the result of processing data from eight databases. The initiative involves various agencies, including jails, public-welfare services, and psychiatric and drug treatment centers. The function of the score is to help social workers determine whether an investigation should be carried out during the case assessment. The system hopes to save lives by alerting caseworkers to the most serious cases and allocating available resources in a manner that prioritizes these high-risk cases. In the absence of the system, human decision-making had been found to allocate resources quite inefficiently, flagging 48% of the lowest-risk families for investigation while screening out 27% of the highest-risk families (ibid).

The algorithm has a mixed social impact. The s

  • Hurley, D. (2018): “Can an Algorithm Tell When Kids Are in Danger?”. In New York Times Magazine. [online] Available at: https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm-tell-when-kids-are-in-danger.html [Accessed 18 Oct. 2018].
  • Edes, A. and Bowman, E. (2018): “Automating Inequality: Algorithms”. In Public Services Often Fail The Most Vulnerable. [online] In NPR.org. Available at: https://www.npr.org/sections/alltechconsidered/2018/02/19/586387119/automating-inequality-algorithms-in-public-services-often-fail-the-most-vulnerab?t=1540978826567 [Accessed 31 Oct. 2018].
2023-01-02
Algorithm to determine credit risk
N/A
United States (US)
SoFi
business and commerce
evaluating human behaviour
private sector
age discrimination

N/A

2011-XX-XX
N/A

N/A

active use

Short for Social Finance, SoFi is a fintech company that uses AI and big data to aggregate a number of factors to determine a person's creditworthiness (Abkarians, 2018). The firm primarily gives loans to students in the US using FICO scores and its own model, which projects a student's likely future ability to repay. The algorithm uses data such as mortgage refinancing and social media posts (Agrawal, 2017).

On the one hand, using proxies to determine creditworthiness can open up new channels of finance, particularly for students. On the other hand, several studies have noted that using proxies is not an accurate or effective way of determining ability to repay and is often prejudicial to anyone who does not fit the 'proxy mold' (Bruckner, 2018).

  • Abkarians. (2018). Giving Credit Where its Due: Machine Learning’s Role in Lending. Technology and Operations Management. https://d3.harvard.edu/platform-rctom/submission/giving-credit-where-its-due-machine-learnings-role-in-lending/
  • Agrawal, V. (2017, December 20). How Alternative Lenders are Using Big Data and AI to Revolutionise Lending. Datafloq. https://datafloq.com/read/alternative-lenders-using-big-data-ai-lending/
2022-12-30
Algorithm to determine test score pricing
The Princeton Review
United States (US)
The Princeton Review
education and training
predicting human behaviour
profiling and ranking people
private sector
racial discrimination

N/A

N/A
N/A

N/A

active use

The Princeton Review, an educational company, offers an online tutoring service and charges between $6,600 and $8,400 for its packages. In order to purchase a package, a customer must enter a ZIP code (Miller & Hosanagar, 2019). A ProPublica analysis found that ZIP codes with a high median income or a large Asian population were more likely to be quoted the highest prices. Customers in predominantly Asian neighborhoods were nearly twice as likely to see higher prices compared to the general population, even if the area was low-income (Angwin et al., 2015).

When queried, The Princeton Review replied that its prices are simply determined by geographic region, with no intended effect on race. This poses several problems, as geographic distribution is highly racialised, and certain populations can be negatively affected because of historical and socio-cultural factors that are not their fault (Angwin et al., 2016).
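A purely hypothetical Python sketch of the mechanism at issue (quoting a package price from a ZIP-code lookup); the region table and price points below are invented and are not The Princeton Review's data:

    # Hypothetical illustration of ZIP-code-based price quoting. The region
    # mapping and prices are invented; only the mechanism (same product,
    # different quote by geography) mirrors the description in this entry.
    REGION_BY_ZIP_PREFIX = {"100": "region_a", "945": "region_b"}
    PACKAGE_PRICE = {"region_a": 8400, "region_b": 7200, "default": 6600}

    def quote(zip_code: str) -> int:
        region = REGION_BY_ZIP_PREFIX.get(zip_code[:3], "default")
        return PACKAGE_PRICE[region]

    print(quote("10027"))  # quoted the top price
    print(quote("73105"))  # quoted the base price

Because ZIP codes correlate strongly with race and income, a rule that looks purely geographic can still produce the disparate quotes described above.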

  • Angwin, J., Mattu, S., & Parris Jr., T. (2016). When Algorithms Decide What You Pay. ProPublica. https://www.propublica.org/article/breaking-the-black-box-when-algorithms-decide-what-you-pay
  • Angwin, J., Mattu, S., & Larson, J. (2015, September 3). Asian Students Are Charged More for Test Prep. The Atlantic. https://www.theatlantic.com/education/archive/2015/09/princeton-review-expensive-asian-students/403510/
2022-12-29
MVA Algorithm to determine property value
City of Detroit (US)
Detroit (US)
N/A
infrastructure
automating tasks
public administration
socioeconomic discrimination
racial discrimination

N/A

N/A
N/A

N/A

active use

Over 25 cities use a tool called the Market Value Analysis algorithm to classify and map neighborhoods by market strength and investment value. Cities use MVA maps to craft tailored urban development plans for each type of neighborhood. These plans determine which neighborhoods receive housing subsidies, tax breaks, upgraded transit or greater code enforcement. Cities using the MVA are encouraged by its developer to prioritize investments and public subsidies first in stronger markets before investing in weaker, distressed areas as a way to maximize the return on investment for public development dollars.

In Detroit, city officials used the MVA to justify the reduction and disconnection of water and sewage utilities, as well as the withholding of federal, state and local redevelopment dollars in Detroit’s “weak markets,” which happened to be its Blackest and poorest neighborhoods. The recommendations from Indianapolis’ MVA meant small business support, home repair and rehabilitation,

  • Cytron, N., Pettit, K. L. S., Kingsley, G. T., Erickson, D., & Seidman, E. S. (2014). What Counts: Harnessing Data for America's Communities. Federal Reserve Bank of San Francisco & Urban Institute.
  • Le, V. (2021, March 31). Algorithmic Redlining Is Real. Why Not Algorithmic Greenlining? Governing. https://www.governing.com/community/algorithmic-redlining-is-real-why-not-algorithmic-greenlining.html
2022-12-29
ZipRecruiter Algorithm to assist in employee recruitment
N/A
worldwide
ZipRecruiter
business and commerce
evaluating human behaviour
automating tasks
private sector
racial discrimination
socioeconomic discrimination

N/A

2010-XX-XX
N/A

N/A

active use

ZipRecruiter is essentially an online job board with a range of personalized features for both employers and jobseekers. ZipRecruiter is a quintessential example of a recommender system, a tool that, like Netflix and Amazon, predicts user preferences in order to rank and filter information—in this case, jobs and job candidates. Such systems commonly rely on two methods to shape their personalized recommendations: content-based filtering and collaborative filtering. Content-based filtering examines what users seem interested in, based on clicks and other actions, and then shows them similar things. Collaborative filtering, meanwhile, aims to predict what someone is interested in by looking at what people like her appear to be interested in.
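As an illustrative sketch (not ZipRecruiter's code), the toy Python below contrasts the two approaches named above: content-based filtering scores unseen jobs by similarity to jobs a user clicked, while collaborative filtering scores them from the clicks of similar users. All vectors and click data are invented.

    # Toy sketch of the two recommendation strategies described above.
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    # Content-based: represent jobs as feature vectors and score candidates
    # by similarity to the jobs this user already clicked on.
    job_features = {"job1": np.array([1.0, 0.0, 1.0]),
                    "job2": np.array([1.0, 0.2, 0.9]),
                    "job3": np.array([0.0, 1.0, 0.1])}
    clicked = ["job1"]
    profile = np.mean([job_features[j] for j in clicked], axis=0)
    content_scores = {j: cosine(profile, v) for j, v in job_features.items() if j not in clicked}

    # Collaborative: score jobs from the click histories of similar users
    # (user-user collaborative filtering on a 0/1 click matrix).
    clicks = np.array([[1, 1, 0],    # user 0 (the target user)
                       [1, 1, 1],    # user 1
                       [0, 0, 1]])   # user 2
    target = clicks[0]
    sims = np.array([cosine(target, row) for row in clicks])
    sims[0] = 0.0                                   # ignore self-similarity
    collab_scores = sims @ clicks                   # weighted votes per job

    print(content_scores)
    print(collab_scores)                            # higher = more recommended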

Personalized job boards like ZipRecruiter aim to automatically learn recruiters’ preferences and use those predictions to solicit similar applicants. Like Facebook, such recommendation systems are purpose-built to find and replicate patterns in user

  • Bogen, M. (2019, May 6). All the Ways Hiring Algorithms Can Introduce Bias. Retrieved from Harvard Business Review website: https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias
  • Kim, P. T. (2020). MANIPULATING OPPORTUNITY. Virginia Law Review, 106(4), 867–935. Retrieved from https://www.jstor.org/stable/27074709
2022-12-27
AURA Algorithm to predict risk of child mistreatment
Los Angeles Department of Children and Family Services
Los Angeles (US)
SAS Institute
social services
evaluating human behaviour
predicting human behaviour
public administration
state surveillance
socioeconomic discrimination

N/A

2008-XX-XX
2019-XX-XX

N/A

no longer in use

In 1999, Los Angeles County began using a decision-making model, Structured Decision Making (SDM), to assess the risk of children being mistreated (Teixeira & Boyas, 2017). Concerns about the accuracy and effectiveness of the tool led to a review and report by the Los Angeles County Blue Ribbon Commission on Child Protection, which recommended, among other things, the adoption of a predictive analytics tool (Slattery, 2015). The tool, named the Approach to Understanding Risk Assessment (AURA), would use risk factors from previous cases to determine the risk of mistreatment in future cases (Nash, 2022).

The tool was expected to streamline child protection efforts and allow better use of resources; however, several problems cropped up, beginning with the SDM. The accuracy of the SDM was called into question, meaning the majority of the data that AURA was modelled on was found to be flawed (Slattery, 2015). Secondly, manipulation of the system by knowing parents or caretakers as well as social workers meant that the

  • Branson, Hailey. “L.A. County Supervisors Call for Study of Child Risk Assessment Tool after Boy Is Found Dead in Closet.” Los Angeles Times, 20 Sept. 2016, www.latimes.com/local/lanow/la-me-ln-supervisors-yonatan-aguilar-risk-20160920-snap-story.html. Accessed 27 Dec. 2022.
  • Nash, Michael. “Report on Risk Assessment Tools, SDM and PredictiveAnalytics.” Lacounty.gov, 2022, file.lacounty.gov/SDSInter/bos/bc/1023048_05.04.17OCPReportonRiskAssessmentTools_SDMandPredictiveAnalytics_.pdf. Accessed 27 Dec. 2022.
2022-12-27
Algorithm to identify child mistreatment
New Zealand Ministry of Social Development
New Zealand (NZ)
New Zealand Ministry of Police Department
social services
predicting human behaviour
public administration
threat to privacy
other kinds of discrimination

N/A

N/A
N/A

N/A

not implemented

The New Zealand Ministry of Social Development, in 2016, commissioned a research team to explore a predictive modelling tool which would help prevent adverse events for children younger than 5 years old. The pilot tool was intended to be modelled after Allegheny County's screening tool and would look at data tracking a family's contact with public systems, assigning a risk score to around 60,000 children born in New Zealand in 2015 (Loudenback, 2015).

The tool would utilise 132 data points that the government has on file about a child's caregivers, including the caregiver's age and whether the parent is single (Jackson, 2016). Then-opposition spokesperson Jacinda Ardern called on officials to immediately stop the study, and UNICEF New Zealand's advocacy manager stated that assigning risk scores to newborns and waiting to see if they were abused would be a gross breach of human rights and would attach a layer of objectivity to a process that required more human than artificia

  • Dare, T. (2015, August 3). Anne Tolley’s “lab rats” call inflammatory political rhetoric. Retrieved December 27, 2022, from Stuff website: https://www.stuff.co.nz/national/politics/opinion/70773764/anne-tolleys-lab-rats-call-inflammatory-political-rhetoric
  • Jackson, K. (2016). Predictive Analytics in Child Welfare — Benefits and Challenges - Social Work Today Magazine. Retrieved December 27, 2022, from www.socialworktoday.com website: https://www.socialworktoday.com/archive/MA18p10.shtml
2022-12-27
International Baccalaureate grade-predicting algorithm
United Kingdom
United Kingdom (UK)
N/A
education and training
profiling and ranking people
public administration
other kinds of discrimination

N/A

N/A
N/A

N/A

no longer in use

The IB’s algorithm rendered judgment using data that mostly relied on contextual factors outside of students’ own performance. Its reliance on schools’ historical data — using trends of past students to guess how a single student might have achieved — risked being unfair to atypical students, such as high performers in historically low-performing schools.

Because 94 schools new to the IB lacked sufficient data, their students’ grades were adjusted to fit the past performance of students from other schools, who may have had different circumstances. And the IB’s “active” use of teacher-predicted grades was puzzling, absent an explanation of how it would mitigate acknowledged systemic inaccuracies.

  • Allan, Carli. IB Results 2020: New Coursework-Based Grades. 17 Aug. 2020, whichschooladvisor.com/uk/school-news/ib-results-2020-new-grading-system.
  • Asher-Schapiro, Avi. “EXPLAINER The IB-Exam Grading Algorithms amid Coronavirus: What’s the Row about?” Edited by Katy Migiro, Reuters, Thomson Reuters Foundation, 12 Aug. 2020, www.reuters.com/article/global-tech-education/explainer-exam-grading-algorithms-amid-coronavirus-whats-the-row-about-idUSL8N2FD0EI.
2022-12-16
AI to track suicide risk
N/A
worldwide
Touchkin
health
predicting human behaviour
private sector
threat to privacy

N/A

2019-XX-XX
N/A

N/A

active use

Many AI efforts within mental healthcare have been directed toward suicide. At the most basic level, ML algorithms can identify suicide risk factors, not just in isolation, but also while integrating complex interactions across variables. ML techniques applied to well-characterized datasets have identified associations between suicide risk and clinical factors such as current depression, past psychiatric diagnosis, substance use and treatment history (Jenkins et al., 2014; Passos et al., 2016), with additional analytics highlighting environmental effects (Bonner and Rich, 1990; Fernandez-Arteaga et al., 2016). Multi-level modeling of Internet search patterns has also been used to identify risk factors of interest (Song et al., 2016), with Google data analytics providing a better estimate of suicide risk than traditional self-report scales in some cases (Ma-Kellams et al., 2016).
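A generic, hedged sketch of the kind of association analysis described above (not any specific published model): fit a regularized logistic regression on binary clinical features and read the coefficients as candidate risk-factor associations. The feature names and data below are synthetic placeholders.

    # Generic, illustrative sketch of identifying candidate risk factors with ML.
    # Synthetic data; feature names are placeholders, not a clinical instrument.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    features = ["current_depression", "past_psychiatric_dx", "substance_use", "prior_treatment"]
    X = rng.integers(0, 2, size=(500, len(features))).astype(float)
    # Synthetic outcome loosely driven by the first two features, for demonstration only.
    logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 2.0
    y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

    model = LogisticRegression(penalty="l2", max_iter=1000).fit(X, y)
    for name, coef in sorted(zip(features, model.coef_[0]), key=lambda t: -abs(t[1])):
        print(f"{name}: {coef:+.2f}")   # larger |coefficient| = stronger association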

Once identified through exploratory analyses, risk factors can be used to inform prediction models. Predicti

2022-12-16
Algorithm to determine risk of travelers at borders
FRONTEX
European Union (EU)
FRONTEX
policing and security
profiling and ranking people
public administration
other kinds of discrimination
racial discrimination

N/A

N/A
N/A

N/A

being tested

The European Travel Information and Authorization System (ETIAS) has, for example, been adopted to “provide a travel authorization for third-country nationals exempt from the visa requirement enabling consideration of whether their presence on the territory of the Member States does not pose or will not pose a security, illegal immigration or a high epidemic risk” (Recital 9 of Regulation (EU) 2018/1240 of 12 September 2018 establishing a European Travel Information and Authorization System (ETIAS)).

The ETIAS regulation provides for an algorithm that performs a pre-travel assessment of whether a visa-exempt third-country national raises any security or public concerns through their movement across the border. The algorithm has raised questions, as many of the variables in its calculation may be determined to be discriminatory.

  • AlgorithmWatch. (2022). Visa-free travelers to the EU will undergo “risk” checks from 2023. Who counts as risky remains unclear. AlgorithmWatch. https://algorithmwatch.org/en/visa-free-travelers-risk-checks/
  • ERNST, C. (2022). Parliamentary question | ETIAS screening rules algorithm | E-003185/2022 | European Parliament. Europa.eu. https://www.europarl.europa.eu/doceo/document/E-9-2022-003185_EN.html
2022-12-14
DELIA | Algorithm to predict location and perpetrator of crime
N/A
Italy
Italian Police
policing and security
predicting human behaviour
evaluating human behaviour
public administration
racial discrimination
state surveillance

N/A

2008-XX-XX
N/A

N/A

active use

Delia (the Dynamic Evolving Learning Integrated Algorithm system) is a predictive policing system operating in Italy. The system was developed by Italian police in conjunction with KeyCrime in order to act as a crime 'forecast' by location and potential offender. The software analyses up to 1.5 million variables and assesses four criminogenic elements: type of crime, objective, modus operandi and the psycho-physical characteristics of the perpetrator (including tattoos, piercings and clothing).

The algorithm operates on the premise that hotspot analysis can lead to a higher rate of solving and predicting crime. It makes correlations among the myriad data elements to link a crime with a potential criminal. The algorithm has been shown to increase the rate of cases resolved and, in turn, to reduce the police resources required and overall crime.

  • Fair Trials. (2021). Fair Trials calls for ban on the use of AI to “predict” criminal behaviour. Fair Trials. https://www.fairtrials.org/articles/news/fair-trials-calls-ban-use-ai-predict-criminal-behaviour/
  • Gatti, C. (2022). Monitoring the monitors: a demystifying gaze at algorithmic prophecies in policing. Justice, Power and Resistance, 1–22. https://doi.org/10.1332/ubqa2752
2022-12-13
Algorithm to determine credit scores
N/A
United States (US)
Upstart
business and commerce
predicting human behaviour
profiling and ranking people
private sector
threat to privacy

N/A

N/A

N/A

active use

Upstart is a lending platform that uses artificial intelligence and is designed to improve access to affordable credit for people in demographics for whom traditional methods would have made it harder to get a loan approved. Traditional methods are still being used because lenders are comfortable with looking at the length of someone's credit history and deciding on that basis whether they qualify to borrow from a bank or a lending company.

The company uses machine learning algorithms along with artificial intelligence to take several factors into consideration when deciding how creditworthy a potential borrower is. It considers components such as one’s specific occupation, employer, education, or current institution where they may still be in school. In broadening the scope of elements that lenders can use, borrowers can be approved for loans that they may not have been approved for under traditional FICO considerations.

  • Chan, L., Hogaboam, L., & Cao, R. (2022). Artificial Intelligence in Credit, Lending, and Mortgage. Applied Innovation and Technology Management, 201–211. https://doi.org/10.1007/978-3-031-05740-3_13
  • Di Maggio, M., Ratnadiwakara, D., & Carmichael, D. (2022, March 1). Invisible Primes: Fintech Lending with Alternative Data. National Bureau of Economic Research. https://www.nber.org/papers/w29840
2022-12-12
SAS | Algorithm to determine teachers' effectiveness
N/A
United States (US)
SAS-EVAAS
education and training
evaluating human behaviour
public administration
weakening of democratic practices

N/A

2003-XX-XX
N/A

The Houston Federation of Teachers filed a suit against the Houston Independent School District, challenging the use of the SAS. The lawsuit was the m

active use

The SAS system evaluates teachers based on students' test scores. These algorithms typically use data on student performance, such as test scores and grades, to calculate a score that reflects the teacher's effectiveness (Collins, 2014). The exact method used to calculate the score can vary, but many of these algorithms use statistical techniques to control for factors such as student demographics and previous academic achievement. The goal of these algorithms is to provide a fair and objective measure of a teacher's performance that can be used to support professional development and identify areas for improvement (Collins, 2014). The algorithm is also intended to hold teachers accountable and provide positive and negative incentives to improve their instruction (Amrein-Beardsley & Geiger, 2020).

The US Department of Education introduced a program that provided grants to schools depending on their achievement of certain metrics and the evaluations provided by the SAS (The Secret Algorith

  • Amrein-Beardsley, A., & Geiger, T. (2020). Methodological Concerns About the Education Value-Added Assessment System (EVAAS): Validity, Reliability, and Bias. SAGE Open, 10(2), 215824402092222. https://doi.org/10.1177/2158244020922224
  • Collins, C. (2014). Houston, We Have a Problem: Teachers Find No Value in the SAS Education Value-Added Assessment System (EVAAS®). Education Policy Analysis Archives. https://doi.org/10.14507/epaa.v22.1594
2022-12-08
Structured Assessment of Violence Risk in Youth (SAVRY)
N/A
United States (US)
N/A
policing and security
evaluating human behaviour
predicting human behaviour
public administration
socioeconomic discrimination

N/A

2003-XX-XX
N/A

In 2019, the Public Defender Service challenged the use of SAVRY in court. The defendant was a juvenile who was charged with robbery, pleaded guilty and showed good conduct in the time leading up to sentencing. Prior to sentencing, the SAVRY determined that the defendant was at high risk of committing acts of violence, which vitiated his probation and led the defense lawyers to challenge the use of the algorithm (Richardson et al., 2019).

The defense lawyer pointed out several flaws in the algorithm's functioning that prejudiced the defendant, particularly its use of proxy data such as parental criminality and the number of police contacts the juvenile had had. The judge disallowed the use of SAVRY in the case but did not extend the ruling to other instances where SAVRY would be used (Richardson et al., 2019).

active use

SAVRY is a risk assessment tool for juvenile offenders (Vincent & Viljoen, 2020). The algorithm, developed in 2003, operates by using 24 risk factors and 6 protective factors (Richardson et al., 2019). The risk factors, divided into historical (past supervision/intervention efforts, etc.), individual, and social/contextual (peer rejection, substance-abuse difficulties), make up the score, which results in a rating of 'low', 'moderate' or 'high' likelihood of violence by the juvenile offender (Miron et al., 2020).
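For illustration only, a hypothetical Python sketch of a tally over rated risk and protective items of the kind described above; the item codings, weights and cut-offs are invented, and the actual SAVRY summary rating relies on structured professional judgment rather than a fixed formula.

    # Hypothetical sketch of a risk/protective item tally in the spirit of the
    # structure described above. Codings, weights and cut-offs are invented;
    # the real SAVRY summary rating is a structured professional judgment.
    def summary_rating(risk_items, protective_items):
        # risk_items: 24 ratings coded 0 (low), 1 (moderate) or 2 (high)
        # protective_items: 6 ratings coded 0 (absent) or 1 (present)
        total = sum(risk_items) - sum(protective_items)
        if total <= 10:
            return "low"
        if total <= 25:
            return "moderate"
        return "high"

    example_risk = [0] * 10 + [1] * 10 + [2] * 4      # 24 item ratings
    example_protective = [1, 1, 0, 0, 0, 0]           # 2 of 6 protective factors present
    print(summary_rating(example_risk, example_protective))   # -> "moderate"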

Despite the algorithm's widespread use, concerns have been flagged around its model (Chae, 2020). To begin with, the technique applied in SAVRY cannot be tested for falsity, the algorithm does not have a known error rate and, furthermore, it is more likely to be misused when assessing low-income children of color (Richardson et al., 2019). Additionally, a juvenile offender is likely to have a higher score if their parent has a criminal history whic

  • Chae, J. (2020, July 1). Hidden Algorithms, Bad Science, and Racial Discrimination in the D.C. Juvenile Justice System. Medium. https://justinhchae.medium.com/hidden-algorithms-bad-science-and-racial-discrimination-in-the-d-c-juvenile-justice-system-d8770185e071
  • Miron, M., Tolan, S., Gómez, E., & Castillo, C. (2020). Evaluating causes of algorithmic bias in juvenile criminal recidivism. Artificial Intelligence and Law. https://doi.org/10.1007/s10506-020-09268-y
2022-12-08
Algorithm to help social workers decide whether to investigate families for child abuse
Oregon Department of Human Services
Oregon (US)
Oregon Department of Human Services
social services
evaluating human behaviour
predicting human behaviour
public administration
threat to privacy
state surveillance
racial discrimination
socioeconomic discrimination

N/A

N/A
2022-XX-XX

N/A

no longer in use

Oregon's Department of Human Services (DHS) developed a screening tool aimed at predicting the risk that children face of winding up in foster care or being investigated in the future. The algorithm was modelled after the Allegheny Family Screening Tool, which was widely criticized for its demonstrably biased outcomes (Johanson, 2022). The algorithm draws from internal child welfare data (Williams, 2020). It produces a risk score; the higher the score, the greater the estimated chance that the child is being neglected, which influences the decision to send out a caseworker (Johanson, 2022).

An investigation into the Allegheny algorithm began to raise flags about the accuracy and bias potential of similar algorithms (Burke, 2022). The model was found to have irregularly flagged a disproportionate number of Black children for mandatory neglect investigations, which not only introduced racial bias but also took vital resources away from other children (Wil

  • Burke, S. H. and G. (2022). Oregon weighs use of AI child-neglect algorithm critics say adds to racial, economic disparities. The Register-Guard. https://www.registerguard.com/story/news/2022/04/29/ai-algorithm-that-screens-for-child-neglect-raises-concerns-oregon-families-allegheny-county-pa-tool/65352644007/
  • Ho, S., & Burke, G. (2022, June 2). Oregon Dropping AI Tool Helps Decide Child Abuse Cases. Nationworldnews.com. https://nationworldnews.com/oregon-dropping-ai-tool-helps-decide-child-abuse-cases/
2022-12-06
GPT-3 | Natural Language Processing algorithm
N/A
worldwide
OpenAI
communication and media
language analysis
private sector
disseminating misinformation
religious discrimination

N/A

2022-XX-XX
N/A

N/A

active use

GPT-3 is an auto-regressive artificial intelligence model developed by OpenAI. It is trained on billions of language inputs and works as a language prediction model. The model determines the conditional probability of the words that might appear next and provides an answer or fills in a sentence (Si et al., 2022). The algorithm is intended to have multiple daily uses but has already raised concerns about its ability to minimize bias (Zhang & Li, 2021).
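To illustrate "conditional probability of the next word" in an auto-regressive model, here is a toy Python sketch using simple bigram counts; it stands in for the transformer GPT-3 actually uses, and the corpus is a placeholder.

    # Toy autoregressive next-token model (bigram counts) standing in for the
    # transformer-based conditional distribution described above.
    from collections import Counter, defaultdict

    corpus = "the model predicts the next word and the model fills the sentence".split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def next_word_distribution(prev):
        counts = bigrams[prev]
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}   # P(next word | previous word)

    def complete(prompt, steps=4):
        words = prompt.split()
        for _ in range(steps):
            dist = next_word_distribution(words[-1])
            if not dist:
                break
            words.append(max(dist, key=dist.get))          # greedy: most probable next word
        return " ".join(words)

    print(next_word_distribution("the"))
    print(complete("the model"))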

The algorithm has, in studies, demonstrated primarily gendered and religious bias. In a series of tests covering prompt completion, analogical reasoning and story generation, Muslims were compared with terrorists in 23 percent of cases, and when the algorithm was prompted to generate job ads, a majority of the ads showed a preference for male candidates (Christopher, 2021).

  • Christopher, A. (2021). A Brief Intro to the GPT-3 Algorithm | HackerNoon. Hackernoon.com. https://hackernoon.com/a-brief-intro-to-the-gpt-3-algorithm-t31f37k5
  • Lucy, L., & Bamman, D. (2021). Gender and representation bias in GPT-3 generated stories. 48–55. https://doi.org/10.18653/v1/2021.nuse-1.5
2022-12-06
Algorithm to calculate medical benefits
Idaho
Idaho
N/A
social services
profiling and ranking people
public administration
disability discrimination

N/A

2011-XX-XX
N/A

In 2012, the ACLU brought a suit on behalf of Medicaid beneficiaries in Idaho. The court found deep flaws in the training data used to develop the algorithm and determined that the decisions made were not representative of the citizens' actual needs (Brown, 2020).

no longer in use

Prior to 2011, the amount of Medicaid benefits allocated to persons in need in the State of Idaho was assessed by people. Assessors would visit the person and interview them, determining how caretakers would be assigned (Stanley, 2017). In 2011, the use of human assessors was swapped for an algorithmic tool which determines how many hours of help persons in need are allocated (Brown, 2020). The algorithm receives data from an assessor who meets with the beneficiary. The information consists of data pooled from 200 questions asked of the beneficiary (What Happens When an Algorithm Cuts Your Health Care, 2018).

The algorithm then places the beneficiary into one of three tiers of care, with the first tier being the lowest form of support, often services in an outpatient clinic, the second tier providing more assistance and a wider range of services, and the third tier providing additional supports, which can mean inpatient treatment or 24-hour paid support (Lecher, 2018).

  • Brown, N. J. (2020). Carl Brown (1928–2020). Review of Middle East Studies, 54(2), 318–319. https://doi.org/10.1017/rms.2021.15
  • Lecher, C. (2018, March 21). A healthcare algorithm started cutting care, and no one knew why. The Verge; The Verge. https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy
2022-12-06
Tengai | Algorithm to interview potential employees
Upplands-Bro municipality (Sweden)
Sweden
Furhat Robotics
labour and employment
evaluating human behaviour
profiling and ranking people
public administration
other kinds of discrimination
manipulation / behavioural change

N/A

2020-XX-XX
N/A

N/A

active use

Furhat Robotics developed a hiring robot named Tengai to make hiring decisions more quickly and in the hope of reducing bias introduced by human reviewers. Each candidate is asked the same questions in the same order, and the robot provides a transcript of the answers to human reviewers, who then decide whether to proceed with the application or not.

The robot is given no information on the applicant's race, gender or other identifying characteristics, and this is touted as its contribution to making the hiring process less biased. The approach can be flawed in a number of ways. First, an in-person interview with a rigid, formulaic approach can miss points of discussion that may be valuable in determining the candidate's fit with the organization. Secondly, it forces candidates to negotiate around the existing technology instead of presenting their contribution to the company. Lastly, the assumption that introducing technology at the interview stage to lessen bias

  • Misuraca, G., and van Noordt, C.., Overview of the use and impact of AI in public services in the EU, EUR 30255 EN, Publications Office of the European Union, Luxembourg, 2020, ISBN 978-92-76-19540-5, doi:10.2760/039619, JRC120399
  • Savage, M. (2019, March 12). Meet Tengai, the job interview robot who won’t judge you. BBC News. Retrieved from https://www.bbc.com/news/business-47442953
2022-11-29
DALL-E 2 | Algorithm to generate art
N/A
worldwide
OpenAI
communication and media
automating tasks
private sector
manipulation / behavioural change

N/A

2022-XX-XX
N/A

N/A

active use

Algorithms are increasingly being used to generate art. Models are trained on large databases of images, and users can then make specific creations without the traditional process of building landscapes or artistic elements from scratch. DALL-E 2 is a model created by OpenAI. The model is billed as superior to its predecessor, DALL-E, due to greater accuracy, caption matching and photorealism (OpenAI, 2022).

AI-generated art has been criticized for several reasons, primarily for infringing the copyright protections usually afforded to artistic works. If, for instance, an artist's work is used in the training dataset without disclosure or permission, legal complications arise, particularly as the resulting images may be used for commercial and other unauthorized purposes (Marcus et al., 2022). The image selection phases may include copyrighted data to train the model to identify tags and labels. Questions are also raised about the resulting generated works and copyright assignation: does it belon

  • Knight, W. (2022, August 19). Algorithms Can Now Mimic Any Artist. Some Artists Hate It. WIRED. https://www.wired.com/story/artists-rage-against-machines-that-mimic-their-work/
  • Marcus, G., Davis, E., & Aaronson, S. (2022). A very preliminary analysis of DALL-E 2. https://arxiv.org/ftp/arxiv/papers/2204/2204.13807.pdf
2022-11-29
Algorithm to predict risk of teenage pregnancy
Salta provincial government (Argentina)
Salta (Argentina)
Argentina
Microsoft
social services
profiling and ranking people
evaluating human behaviour
predicting human behaviour
public administration
socioeconomic discrimination

N/A

N/A
N/A

N/A

not known

Local and federal governments in Argentina continue to invest in a future of algorithmic and data-based governance. On the national level, the federal government has developed a database profiling citizens based on socioeconomic data in order to allocate social benefits more efficiently, as well as a database that stores civilian biometric data to improve public safety and criminal investigations.

In June 2017, the local government of the province of Salta partnered with Microsoft to create and deploy two different predictive tools optimized for identifying teenage pregnancies and school dropouts (Ortiz Freuler and Iglesias 2018). Trained from private datasets made available by the province’s Ministry of Early Childhood, the two systems identify those with the highest risk of teenage pregnancy and dropping out of school, alerting governmental agencies upon determining high-risk subjects. 

  • Kwet, M. (2019). Digital colonialism is threatening the Global South, Al Jazeera, https://www.aljazeera.com/indepth/opinion/digital-colonialism-threatening-global-south-190129140828809.html
  • Ortiz Freuler, J. and Iglesias, C. (2018). Algorithms and Artificial Intelligence in Latin America: A Study of Implementation by Governments in Argentina and Uruguay, World Wide Web Foundation.
2022-11-28
RoboDebt | Algorithm to detect welfare fraud
Australian Government
Australia
N/A
social services
automating tasks
public administration
socioeconomic discrimination
other kinds of discrimination

N/A

2016-07-XX
2019-11-XX

A class action suit was raised by over 10,000 citizens protesting the erroneous debt repayment notices. The class action was settled with the government agreeing to pay financial reparations, which included repayment of illegally raised debts, legal fees and interest on the illegally raised debts (Rinta-Kahila et al., 2021).

Centrelink is the Australian Government's programme to distribute social security payments to citizens and citizen groups such as the unemployed, retirees and Indigenous Australians. The programme receives part of its data from the Australian Taxation Office (ATO), which provides information on citizens' income and thereby determines their support payments (Wolf, 2020). Citizens are required to report increases in earnings so that their support payments can be adjusted; where they fail to do so, they are considered debtors to the State, having received welfare overpayments.

To align earnings with support payments, Centrelink implemented the Online Compliance Intervention (OCI) programme, an automated calculation tool meant to sort out discrepancies between payments and earnings. The tool, named 'Robodebt' in popular media, drew data from Centrelink and the ATO, averaged a citizen's estimated fortnightly earnings, calculated them against their support payments and calculated potential overpayments.
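A small Python sketch, with invented figures, of the income-averaging logic at the heart of the scheme: an annual income from tax records is smeared evenly across 26 fortnights and compared with what the recipient actually reported each fortnight, which can generate an alleged overpayment even when every fortnightly report was correct.

    # Hypothetical sketch of the fortnightly income-averaging at issue in Robodebt.
    # All figures are invented; only the averaging logic mirrors the description above.
    FORTNIGHTS_PER_YEAR = 26

    def averaged_overpayment(annual_income, reported_fortnightly, benefit_rate,
                             income_free_area, taper):
        averaged = annual_income / FORTNIGHTS_PER_YEAR       # annual figure smeared evenly
        overpayment = 0.0
        for reported in reported_fortnightly:
            # Benefit actually paid, based on what the person reported that fortnight.
            paid = max(0.0, benefit_rate - taper * max(0.0, reported - income_free_area))
            # Benefit the averaging model says they were "entitled" to.
            entitled = max(0.0, benefit_rate - taper * max(0.0, averaged - income_free_area))
            overpayment += max(0.0, paid - entitled)
        return overpayment

    # A person who worked only half the year: 13 fortnights at $2,000, 13 at $0.
    reported = [2000.0] * 13 + [0.0] * 13
    annual = sum(reported)                                   # $26,000 from tax data
    print(averaged_overpayment(annual, reported, benefit_rate=560.0,
                               income_free_area=150.0, taper=0.5))

In this toy example the person reported correctly every fortnight, yet the averaged model still produces an alleged debt of $5,525, which is the kind of discrepancy at the centre of the class action described above.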

  • Rinta-Kahila, T., Someh, I., Gillespie, N., Indulska, M., & Gregor, S. (2021). Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. European Journal of Information Systems, 1–26. https://doi.org/10.1080/0960085x.2021.1960905
  • Sarder, M. (2020, June 5). From robodebt to racism: what can go wrong when governments let algorithms make the decisions. The Conversation. https://theconversation.com/from-robodebt-to-racism-what-can-go-wrong-when-governments-let-algorithms-make-the-decisions-132594
2022-11-28
Schufa | Algorithm to determine credit scores
N/A
Germany
Schufa
business and commerce
predicting human behaviour
public administration
private sector
socioeconomic discrimination

N/A

N/A
N/A

N/A

active use

Schufa is Germany's leading credit bureau. Schufa's credit scores affect housing and credit card applications and may lead to rejection by internet providers. The algorithm calculates the Schufa score by aggregating data on the consumer's financial history from over 9,000 companies and partners, ultimately computing a personal score between 0 and 10,000. The consumer's score is available on request to other parties to determine a person's creditworthiness (Crichton, 2019). The algorithm processes the financial data of about 70 million people in Germany, the majority of the population above the age of 18 (Kabushka, 2021).


  • AlgorithmWatch, & Open Knowledge Foundation. (2019). OpenSCHUFA. OpenSchufa. https://openschufa.de/english/
  • Brandimarte, E. (2018). Should we let the algorithms decide? A critical assessment of Automated Decision- Making in Europe. https://www.mayerbrown.com/-/media/files/news/2019/06/brandimartethesis_jun19.pdf
2022-11-28
Svea Ekonomi | Algorithm to assess loan applicants
Svea Ekonomi
Finland
Svea Ekonomi
business and commerce
evaluating human behaviour
profiling and ranking people
private sector
other kinds of discrimination
social polarisation / radicalisation

N/A

N/A
2018-XX-XX

A lawsuit was filed against the company in 2018. The plaintiff had been denied a loan extension as it was discovered that protected characteristics had been used to determine the applicant's creditworthiness. The National Non-Discrimination and Equality Tribunal of Finland ruled that the loan-granting procedure cease to be used and fined the company 100,000 euros (Orwat, 2020).

no longer in use

The company used a score to decide on loan applications. The score took into account gender, language and place of residence, as well as proxy data. This led to broad assumptions: for instance, men were less likely to be issued loans, and Finnish-speaking residents had lower scores than Swedish-speaking residents regardless of actual default rates (European Data Protection Board, 2019). Due to customer complaints, the company's credit assessments were investigated and a complaint was filed with the National Non-Discrimination and Equality Tribunal by a plaintiff who alleged discrimination on the basis of his gender and mother tongue (Matzat, 2018).

The tribunal determined that the credit scoring system had taken protected characteristics into account, contrary to the law. Additionally, the use of proxy data was found to be unlawful, as it unfairly prejudiced people on the basis of group membership regardless of their having a good credit history (Orwat, 2020). The plaintiff's credit records had not been taken into account no

  • Orwat, C. (2020). Risks of Discrimination through the Use of Algorithms. https://www.antidiskriminierungsstelle.de/EN/homepage/_documents/download_diskr_risiken_verwendung_von_algorithmen.pdf?__blob=publicationFile&v=1
  • Svea Bank receives a remark and an administrative fine | Finansinspektionen. (2022). Www.fi.se. https://www.fi.se/en/published/sanctions/financial-firms/2022/svea-bank-receives-a-remark-and-an-administrative-fine/
2022-11-28
Algorithm to assess gig workers
Task Rabbit
United States (US)
Task Rabbit
labour and employment
profiling and ranking people
private sector
gender discrimination
racial discrimination

N/A

2008-XX-XX
N/A

N/A

active use

TaskRabbit is a service that matches people seeking odd jobs with people looking for work. Founded in 2008, the company assesses aspiring workers ('Rabbits') before allowing them to find odd jobs on the platform. After identifying a job, an aspiring Rabbit may face competition from several other requests and then has to engage in a bidding war in which they accept a lower fee for their services. Rabbits receive rankings, and the rankings contribute to the likelihood of TaskRabbit suggesting them for jobs.

An assessment of the system found evidence that Black and Asian workers received lower ratings and were recommended less often by the algorithm. White men were placed higher in the ranking than Black men, and the recommendation algorithm did not correct for the discrepancies and potential bias. When questioned, the company's spokesperson denied the claims of bias, stating that the algorithm was driven by user experience.

  • Amer-Yahia, S., Elbassuoni, S., Ghizzawi, A., Borromeo, R., Hoareau, E., & Mulhem, P. (2020). Fairness in Online Jobs: A Case Study on TaskRabbit and Google “Applications” paper. https://doi.org/10.5441/002/edbt.2020.62
  • Bloomberg. (2016, November 22). Discrimination runs rampant throughout the gig economy, study finds. Mashable. https://mashable.com/article/gig-economy-discrimination-bloomberg
2022-11-28
FACE++ | Gender and race detection algorithm
N/A
worldwide
Megvii
identifying images of faces
profiling and ranking people
private sector
public administration
gender discrimination
racial discrimination
state surveillance
weakening of democratic practices

Yes. By Microsoft Research and MIT Media Lab (Buolamwini et al., 2018)

N/A
N/A

N/A

active use

This particular algorithm is part of a larger suite of face recognition services. It was developed by the Chinese AI company Megvii to expand its offering. The algorithm analyses faces to detect their gender and race. Apart from issues with gender and racial bias in its accuracy (Buolamwini, & al., 2018), there is mixed evidence that it may have been used to aid human rights abuses by the Chinese government against the Uyghur population (Alper & Psaledakis, 2021; Bergen, 2019; Dar S., 2019). It is unknown what the company's response, if any, has been.

  • Alper, A. & Psaledakis, D. (2021). "U.S. curbs Chinese drone maker DJI, other firms it accuses of aiding rights abuses". Reuters, 17 December 2021. https://www.reuters.com/markets/us/us-adds-more-chinese-firms-restricted-entity-list-commerce-2021-12-16/ [last seen March 16 2022]
  • Bergen, M. (2019). "Trump's Latest China Target Includes a Rising Star in AI". Bloomberg, 24 May 2019. Available on: https://www.bloomberg.com/news/articles/2019-05-24/trump-s-latest-china-target-includes-a-rising-star-in-ai [last seen March 16 2022]
2022-11-22
SKALA | Predictive policing within a specific location
North Rhine-Westphalian State Office of Criminal Investigations
Germany
North Rhine-Westphalian State Office of Criminal Investigations
policing and security
predicting human behaviour
public administration
other kinds of discrimination

N/A

2015-XX-XX
N/A

N/A

active use

The State Office of Criminal Investigations in North Rhine-Westphalia developed a predictive policing program in 2015, in line with six other major police authorities. The algorithm was designed to assess burglary risk for specific areas within each district of the police precincts (Egbert, 2018). It uses crime data and socio-economic data to estimate the likelihood of a spatial clustering of criminal offences. The model also uses potential future data to determine which areas may eventually have a higher risk of crime than others within a specific window of seven days (Gerstner, 2018). The algorithm is in use in 16 major police departments and is the largest predictive policing solution in Germany (Seidensticker, 2021).

Predictive policing systems are coming under increasing scrutiny over their claims of fairness. The primary concern is that they over-datafy persons in a certain area or belonging to a certain group, and that their overrepresentation in

  • Brownsword, R., & Harel, A. (2019). Law, liberty and technology: criminal justice in the context of smart machines. International Journal of Law in Context, 15(2), 107–125. Available on https://doi.org/10.1017/s1744552319000065 [last visited on 19 October 2022]
  • Carvalho, A., & Pedrosa, I. (2021). Data Analysis in Predicting Crime : Predictive Policing. 2021 16th Iberian Conference on Information Systems and Technologies (CISTI). Available on https://doi.org/10.23919/cisti52073.2021.9476393 [last visited on 19 October 2022]
2022-10-19
AVATAR | Lie detector algorithm at border checkpoints
N/A
United States (US)
Discern Science
policing and security
predicting human behaviour
public administration
disability discrimination
social polarisation / radicalisation

N/A

N/A
N/A

N/A

being tested

The US Department of Homeland Security and the EU border agency Frontex funded the development of the Automated Virtual Agent for Truth Assessments in Real Time (AVATAR), which was built by the National Center for Border Security and Immigration (BORDERS), headquartered at the University of Arizona, whose researchers eventually formed Discern Science (Salmanowitz, N. 2018). The objective of the algorithm was to determine the credibility of persons crossing borders by screening and interviewing them. The algorithm was field-tested at airports in Romania and Arizona (Salmanowitz, N. 2018).

The concept behind the system is to identify ‘deceiving’ indicators from facial expressions, audio cues and phrasing. The system’s virtual agent asks the traveler a number of questions and is claimed to pick up as many as 50 potential deception cues (Hodgson, C. 2019). The algorithm interprets the data from the responses and sends a verdict to a human patrol agent, recommending that the traveler either proceed or be questioned further.

  • Berti, A. (2020). Finding the truth: will airports ever use lie detectors at security? - Airport Industry Review . Available on https://airport.nridigital.com/air_may20/airport_security_lie_detectors [last visited on 18 October 2022]
  • DEMIREL, Ö. (2019). Parliamentary question | Lie detectors in the area of border security . European Parliament. Available on https://www.europarl.europa.eu/doceo/document/E-9-2019-002653_EN.html [last visited on 18 October 2022]
2022-10-19
RealPage® AI Screening | AI-based screening algorithm for the multifamily apartment rental industry
RealPage
United States (US)
RealPage Resident Screening and Data Science teams
business and commerce
compiling personal data
predicting human behaviour
private sector
gender discrimination
racial discrimination

N/A

2019-06-26
N/A

N/A

active use

RealPage® AI Screening is an AI-based screening algorithm built specifically for the multifamily apartment rental industry. The solution, developed by the RealPage Resident Screening and Data Science teams, enables property management companies to identify high-risk renters with greater accuracy (RealPage Blog, 2019). RealPage developed industry-specific insights into willingness to pay rent: the result is a risk assessment model that analyzes an applicant’s capability and willingness to pay rent in order to predict a renter’s financial performance (Yau, N, 2022).

RealPage AI Screening pairs data science and machine learning techniques with more than thirty million actual lease outcomes to evaluate renter performance over the course of a lease (RealPage Blog, 2019). AI Screening also incorporates granular third-party consumer financial data to better predict applicant risk. Screening results are available within seconds, not hours or days as with traditional mode

  • Rosen, E., Garboden, P., & Cossyleon, J. (2021). Racial Discrimination in Housing: How Landlords Use Algorithms and Home Visits to Screen Tenants. American Sociological Review86(5), 787-822. Available on https://doi.org/10.1177/000312242110 [last visited on 19 October 2022]
  • RealPage Releases AI Screening. RealPage Blog. (2019). Available on https://www.realpage.com/news/realpage-releases-ai-screening/ [last visited on 19 October 2022]
2022-10-18
Dias | Using voice analysis to determine an asylum seeker’s origin
German Federal Office for Migration and Refugees
Germany
German Federal Office for Migration and Refugees
policing and security
language analysis
public administration
weakening of democratic practices
other kinds of discrimination

N/A

2017-XX-XX
N/A

N/A

active use

The German Federal Office for Migration and Refugees (Bundesamt für Migration und Flüchtlinge, BAMF) developed a language recognition software, the language and dialect identification assistance system (DIAS), to determine a speaker’s country of origin from their language. The determination is intended to assess the truthfulness of the asylum seeker’s account in their application.

Procedurally, a staff member of the BAMF calls an internal phone number and enters the asylum seeker’s ID data. The asylum seeker is then invited to verbally describe a specific picture over the phone, after which the recorded description is labelled as the asylum seeker’s country-specific language sample. The data is stored in a central file repository and analyzed by centrally hosted language biometrics software (Siccardo, A. 2022).

  • Cambier-Langeveld, T. (2018). Forensic Linguistics Asylum-seekers, Refugees and Immigrants. Germanic Society for Forensic Linguistics (GSFL) . Available on https://vernonpress.com/index.php/file/6087/e5d6c3d2be345c43ef24ff06a48a6087/1529040692.pdf [last visited on19 October 2022]
  • In P. L. Patrick, M. S. Schmid, & K. Zwaan (2019) Language Analysis for the Determination of Origin. (2019). Language Policy. Springer International Publishing. Available on https://doi.org/10.1007/978-3-319-79003-9 [last visited on19 October 2022]
2022-10-18
Individual-oriented crime prediction: Offender Assessment System (OASys)
UK Ministry of Justice
United Kingdom (UK)
N/A
justice and democratic processes
evaluating human behaviour
profiling and ranking people
public administration
socioeconomic discrimination

N/A

2001-XX-XX
N/A

N/A

active use

The OASys system was developed from 1999 to 2001 in a series of pilot studies and was updated and unified in 2013 in the United Kingdom with the aim of improving offender management (National Offender Management Service, 2009). The system was approved for use by Community Rehabilitation Companies and forms part of the National Probation Service. The tool is used with all adult offenders and is described as a predictor of reoffending within one and two years, based on a set of variables including age, gender, and number of previous sanctions (Wendy Fitzgibbon, 2008).

The system combines individual factors such as personality and temperament, societal influences, and offender demographic information to place the offender on two reoffending predictors: the general reoffending predictor (OGP1) and the OASys violence predictor (OVP1). The system also makes a separate Risk of Serious Harm (RoSH) assessment which determin

  • Jacobson, J. (2022). Algorithmic risk assessment tools in criminal proceedings. Uppsala Universitet. https://www.diva-portal.org/smash/get/diva2:1683810/FULLTEXT01.pdf [last visited on 28 September 2022]
  • Ministry of Justice. (2013, December 12). Offender Assessment System (OASys). Www.data.gov.uk. https://www.data.gov.uk/dataset/911acd3c-495f-48ca-88b6-024210868b06/offender-assessment-system-oasys [last visited on 10 October 2022]
2022-10-10
Buona Scuola | Algorithm to determine allocation of school teachers
Italy's Ministry of Education
Italy
HP Italia
Finmeccanica
education and training
profiling and ranking people
public administration
other kinds of discrimination

Yes. A technical audit was ordered by Lazio's Regional Administrative Court and was carried out by La Sapienza and Tor Vergata Universities. The algorithm was audited to assess its structure and functions. The audit revealed several inefficiencies and inscrutable functions in the algorithm. The conclusion was that the algorithm was not suitable for its purpose, owing to a fragmented development process with differing standards, an outdated programming language and poor explainability (Salvucci, Giorgi, Barchiesi, & Matteo, 2019).

2017-XX-XX
2019-XX-XX

Several lawsuits were filed challenging the outcomes of the algorithm and requesting reassignment. The largest suit challenging the algorithm was decided in favour of the teachers, who were reassigned (Chiusi, 2020).

no longer in use

In 2015, the then Government of Italy introduced educational reforms aimed at improving education service delivery. As part of the reforms, an algorithm was developed by HP Italia and Finmeccanica to assign teachers to vacant positions based on location preference, teachers' scores in merit-based tests and the existing vacancies (Chiusi, 2020).

The rollout of the algorithm sparked controversy. First, several thousand teachers signed a petition citing unfair treatment by the algorithm, a lack of information on the algorithm's functionality and several potential legal violations, including of the GDPR (Conde Nast, 2021). A technical audit found the algorithm to be severely lacking in logic and practicality and remarked that its fragmented stages of development made it difficult to assess (Salvucci et al., 2019).

  • Chiusi, F. (2020). ITALY Automating Society Report 2020. AlgorithmWatch. Available at https://algorithmwatch.org/en/automating-society-2019/italy/ [last visited on 10 October 2022]
  • Conde Nast. (2021, March 4). Perché non siamo ancora pronti per affidare agli algoritmi la pubblica amministrazione. Wired Italia. Available at https://www.wired.it/attualita/tech/2021/03/04/pubblica-amministrazione-algoritmi-piano-colao/ [last visited on 10 October 2022]
2022-10-10
Predict-Align-Prevent | Algorithm to identify children at risk of maltreatment
New Jersey (US)
Oregon (US)
New Hampshire (US)
Washington D.C (US)
Virginia (US)
Arkansas (US)
United States (US)
Predict Align Prevent
social services
predicting human behaviour
public administration
racial discrimination
socioeconomic discrimination
state surveillance

N/A

2017-XX-XX
N/A

N/A

active use

The algorithm was developed by Predict Align Prevent, a non-profit company registered in Texas. The company primarily uses geospatial risk analysis to prevent child maltreatment and to address high-risk areas where child maltreatment might occur (Predict-Align-Prevent, 2022). The algorithm's use is divided into three stages. The ‘predict’ phase uses geospatial predictive risk modelling to narrow down areas of concern for child abuse and neglect. The ‘align’ phase uses this information to identify strategic community collaborators to contact and to put more resources on the ground so that child welfare services are properly distributed (ACLU 2021).

During the ‘prevent’ phase, the algorithm aims to generate data to assess the effectiveness of prevention programs deployed and to inform future prevention efforts. The algorithm has, for example, been deployed in New Hampshire as part of a collaboration with the New Hampshire Department of Health and Human Services (DHHS). T

  • ACLU. (2021). Family Surveillance by Algorithm. Retrieved September 28, 2022, from American Civil Liberties Union website: available on https://www.aclu.org/fact-sheet/family-surveillance-algorithm [Last visited on 28 September 2022]
  • Amos, J., Fletcher, C., Humphries, M., Lichtle, A., Ragland, R., Rogers, A., … Daley, D. (2020). 2020 Predict Align Prevent Data Sharing Initiative Capstone Project Team Members, available on https://papreports.org/assets/documents/Capstone_PAPP.pdf [last visited on 28 September 2022]
2022-10-10
PRECIRE | Algorithm to screen employee candidates
N/A
worldwide
PRECIRE
business and commerce
evaluating human behaviour
language analysis
automating tasks
private sector
disability discrimination
other kinds of discrimination

N/A

2012-XX-XX
N/A

N/A

active use

The PRECIRE algorithm was designed by the German company Precire to provide hiring assistance to companies. The algorithm was trained on a dataset about which no demographic information has been disclosed; the company boasts 38 million text evaluations from 25,000 participants. Candidates are instructed to answer questions over video or telephone and upload the recording to be parsed by the algorithm (PRECIRE, 2020).

The algorithm extracts information from candidates' facial expressions, language, and prosodic features. This results in a personality profile that ‘predicts’ future customer-relations and employee communication skills (Schick & Fischer, 2021). The company claims that the algorithm is grounded in psychological studies which make communication “objectively predictable and measurable” (PRECIRE, 2020).

  • BigBrother. (2019). Communication: Precire Technologies | BigBrotherAwards. Bigbrotherawards.de. Available at https://bigbrotherawards.de/en/2019/communication-precire-technologies [last visited on 10 October 2022]
  • CTGN. (2018). German firm employs AI for recruitment. News.cgtn.com. Available at https://news.cgtn.com/news/356b544f326b7a6333566d54/share_p.html
2022-10-10
HART - Algorithm to determine recidivism
Durham Constabulary
United Kingdom (UK)
Durham Constabulary
policing and security
evaluating human behaviour
public administration
racial discrimination
socioeconomic discrimination
gender discrimination

N/A

2017-XX-XX
N/A

N/A

active use

Durham Constabulary developed its HART tool over five years in collaboration with researchers from Cambridge University. The tool uses a random forest model to determine the risk of reoffending (Burgess, 2018). The algorithm works as a support to custody officers, who ultimately make the final decision but are guided by the algorithm’s recommendation (Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016). The algorithm was developed as part of a wider ‘Checkpoint’ programme, which offers an alternative to incarceration by providing social support to offenders at a moderate risk of reoffending.

The algorithm utilises information on past offending history and proxy data such as age, gender and postcode to sort offenders into three categories: low, moderate and high risk of committing new serious offences over the following two years. Those in the medium-risk category are classified as ‘likely to commit a non-serious offence’ (Oswald, Grace, Urwin, & Barnes, 2018). In
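As a loose illustration of the modelling approach described above, the sketch below trains a toy random-forest risk classifier over the kinds of proxy features this entry mentions (age, gender, postcode, offending history). The feature encoding, the synthetic data and the three labels are assumptions made for the sketch; they are not HART's actual model, features or training data.

```python
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

# Toy training data: invented records with coarse proxy features.
train = pd.DataFrame({
    "age": [19, 45, 31, 52, 23, 38],
    "gender_male": [1, 0, 1, 1, 0, 1],
    "postcode_area": [3, 1, 2, 1, 3, 2],   # stand-in for a coarse postcode code
    "prior_offences": [4, 0, 2, 1, 6, 0],
})
labels = ["high", "low", "moderate", "low", "high", "low"]

# A random forest, the model family the entry names, fitted to the toy data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(train, labels)

# Classify a new (invented) case into one of the three risk bands.
new_case = pd.DataFrame({"age": [27], "gender_male": [1],
                         "postcode_area": [3], "prior_offences": [3]})
print(model.predict(new_case))
```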

  • Burgess, M. (2018). UK police are using AI to inform custodial decisions – but it could be discriminating against the poor. Retrieved from Wired UK website: https://www.wired.co.uk/article/police-ai-uk-durham-hart-checkpoint-algorithm-edit
  • Committee on Legal Affairs and Human Rights. (2020). Committee on Legal Affairs and Human Rights Justice by algorithm -the role of artificial intelligence in policing and criminal justice systems Report *. Retrieved from https://assembly.coe.int/LifeRay/JUR/Pdf/DocsAndDecs/2020/AS-JUR-2020-22-EN.pdf
2022-08-15
NDAS - Individual-oriented crime prediction
West Midlands Police
United Kingdom (UK)
West Midlands Police
policing and security
evaluating human behaviour
public administration
socioeconomic discrimination
other kinds of discrimination

N/A

2019-XX-XX
N/A

N/A

active use

The National Data Analytics Solution (NDAS) is an algorithmic intervention system piloted by West Midlands Police in collaboration with London’s Metropolitan Police, Greater Manchester Police and six other police forces. It is intended to predict a person’s risk of committing a crime in the future, with the aim of intervening before the crime is committed (Mittelstadt et al.). The system uses data relating to around five million individuals, with 1,400 indicators related to crime prediction that allow the system to assess whether a person is likely to commit a violent crime or become the victim of one (Zillka, Weller, & Sargeant, 2022).

The system is primarily composed of three elements: Insight Search (a mobile-based search engine), Business Insights (which includes officer well-being) and the Insights Lab, which develops policing analytics tools (Committee on Legal Affairs and Human Rights, 2020). The use of this system is questionable, particularly because it raises concerns about consent mechanisms for data use in

  • Committee on Legal Affairs and Human Rights. (2020). Committee on Legal Affairs and Human Rights Justice by algorithm -the role of artificial intelligence in policing and criminal justice systems Report *. Retrieved from https://assembly.coe.int/LifeRay/JUR/Pdf/DocsAndDecs/2020/AS-JUR-2020-22-EN.pdf
  • Heubl, B. (2019, September 24). West Midlands Police strive to get offender prediction system ready for implementation. Retrieved from eandt.theiet.org website: https://eandt.theiet.org/content/articles/2019/09/ai-offender-prediction-system-at-west-midlands-police-examined/
2022-08-15
imosphere FACE RAS
UK Department of Health and Social Care
United Kingdom (UK)
Imosphere
social services
making personalised recommendations
public administration
socioeconomic discrimination
age discrimination

N/A

2012-XX-XX
N/A

N/A

active use

The Resource Allocation System (RAS) is used to support budgeting decisions for social care needs. The software takes information about individual needs and circumstances and then provides a budgetary allocation. The claim is then processed by the relevant Social Security Department, which provides income support for different levels of care (Health and Social Security Scrutiny Panel, 2018). The system, developed by Imosphere, is currently in use by around 40 local authorities across England (Alston, 2018). Intended to speed up assessment processes for social care, the system has led to reduced contact between social workers and people who have care needs.

Ultimately, it adds a layer of disconnect which also prevents social workers from assessing non-budgetary support needs (Health and Social Security Scrutiny Panel, 2018). Additionally, the system is not sensitive to inflation adjustments or to preference-related adjustments, for instance the needs of older persons as opposed to the needs of

  • Alston, P. (2018). SUBMISSION TO THE UN SPECIAL RAPPORTEUR ON EXTREME POVERTY AND HUMAN RIGHTS AHEAD OF UK VISIT NOVEMBER 2018. Retrieved from https://bigbrotherwatch.org.uk/wp-content/uploads/2020/02/BIG-BROTHER-WATCH-SUBMISSION-TO-THE-UN-SPECIAL-RAPPORTEUR-ON-EXTREME-POVERTY-AND-HUMAN-RIGHTS-AHEAD-OF-UK-VISIT-NOVEMBER-2018-II.pdf
  • Health and Social Security Scrutiny Panel. (2018). The Long-Term Care Scheme. Retrieved August 15, 2022, from statesassembly.gov.je website: https://statesassembly.gov.je/ScrutinyReports/2018/Report%20-%20Long-Term%20Care%20Scheme%20-%2028%20March%202018.pdf
2022-08-15
Xantura RBV
United Kingdom Ministry of Housing, Communities and Local Government
United Kingdom (UK)
Xantura
social services
automating tasks
making personalised recommendations
public administration
socioeconomic discrimination
other kinds of discrimination

N/A

N/A
N/A

N/A

active use

Xantura is a UK technology firm which claims to provide risk-based verification for social services processes. The company has developed an algorithm to process claims for Housing Benefits and/or Council Tax Reduction across several districts in the United Kingdom (Booth, 2021). The algorithm is deployed to assess the likelihood of fraud and error in benefit and tax reduction claims and was introduced to increase the efficiency of the application process (shepwayvox, 2021).

The algorithm uses over 50 variables, including the type of claim, previous claims, the number of child dependants and the percentile group of statutory sick pay (Hurfurt, 2021). The company has been accused of contributing to growing discrimination, as the algorithm recommends some council benefit claimants for tougher social security checks while processing the applications of low-risk applicants faster.

  • Booth, R. (2021, July 18). Calls for legal review of UK welfare screening system which factors in age. Retrieved August 16, 2022, from the Guardian website: https://www.theguardian.com/society/2021/jul/18/calls-for-legal-review-of-uk-welfare-screening-system-that-factors-in-age
  • Hurfurt, J. (2021). About Big Brother Watch. Retrieved from https://shepwayvox.org/wp-content/uploads/2021/07/Poverty-Panopticon.pdf
2022-08-08
NarxCare | Algorithm to monitor opioid abuse
N/A
United States (US)
Appriss Health
health
product safety
predicting human behaviour
evaluating human behaviour
public administration
private sector
disability discrimination
other kinds of discrimination

N/A

2018-XX-XX
N/A

N/A

active use

NarxCare is an algorithm used to identify patients at risk of substance misuse and abuse. The algorithm uses multi-state Prescription Drug Monitoring Program (PDMP) data, which contains years of patient prescription data from providers and pharmacies, to determine a patient's risk score for misuse of narcotics, sedatives and stimulants (Belmont Health Law Journal, 2020).

The score ranges from 000-999 with higher scores indicating a higher chance for the patient to abuse a prescription. The risk score is presented in the form of a 'Narx Report' which includes the risk score, any red flags in the patient's prescription (which may put them in danger of an unintentional overdose) or other adverse events. The score is available to pharmacists, doctors and hospitals and is intended to assist them in identifying patients who might be at risk for substance abuse (Ray, 2021).

  • Belmont Health Law Journal. (2020). NarxCare, Pharmacies Way of Tracking Opioid Usage of Patients. What You Need to Know . Belmont Health Law Journal. Available on Www.belmonthealthlaw.com. https://www.belmonthealthlaw.com/2020/02/04/narxcare-pharmacies-way-of-tracking-opioid-usage-of-patients-what-you-need-to-know/ [Last visited on 22 June 2022]
  • Ray, T. (2021, July 6). United in Rage | Tarence Ray. The Baffler. Available on https://thebaffler.com/salvos/united-in-rage-ray?utm_source=pocket_mylist [Last visited on 22 June 2022]
2022-06-23
Strategic Subject List | Algorithm to predict risk of violent offenses
Chicago Police Department
Chicago (US)
Chicago Police Department
policing and security
profiling and ranking people
public administration
socioeconomic discrimination
other kinds of discrimination

N/A

2012-XX-XX
2020-XX-XX

N/A

no longer in use

The Strategic Subject List (SSL) is a predictive policing approach deployed in Chicago that analysed several factors, including criminal histories and gang affiliation, to determine the risk of a person's involvement in violence. The score ranged between zero and five hundred and was made available to street police officers (Posadas, 2017).

The SSL was intended to identify persons likely to be both victims and offenders, and according to the Chicago Police Department in 2017 it was used as a mere risk calculation tool that did not place targets on citizens for arrest. Despite this assertion, concerns surrounded the misuse of the tool, with press releases on persons who were arrested being accompanied by their SSL scores. The tool was also intended to channel social services as a mitigation measure for violent crime; however, data access requests revealed that the use of the SSL prioritised arrests instead of social support interventions (Kunichoff & Sier, 2017).

  • Kunichoff, Y., & Sier, P. (2017). The Contradictions of Chicago Police’s Secretive List. Chicago Magazine. Available on https://www.chicagomag.com/city-life/August-2017/Chicago-Police-Strategic-Subject-List/ [Last visited on 20 June 2022]
  • Open Data Network. (2020). Strategic Subject List - Historical. Www.opendatanetwork.com. Available on https://www.opendatanetwork.com/dataset/data.cityofchicago.org/4aki-r3np [Last visited on 21 June 2022]
2022-06-22
Smart Pricing | Airbnb’s algorithm to automatically adjust the daily price of a property’s listing
Airbnb
worldwide
Airbnb
business and commerce
automating tasks
private sector
racial discrimination

Yes

2015-XX-XX
N/A

N/A

active use

Smart Pricing is an algorithm created by Airbnb to automatically adjust the daily price of a property’s listing. This free tool was introduced in November 2015. The algorithm bases its prices on features like the type and location of a listing, seasonality and other factors that influence demand for the property. The algorithm is expected to be more effective than human price setting as it has access to Airbnb servers containing large amounts of data. However, the opacity of the algorithm makes it difficult to assess, and thus the algorithm does not guarantee a benefit to the host (Sharma, M. 2021).


  • Zhang, S., Mehta, N., Singh, P. V., & Srinivasan, K. (2021). Frontiers: Can an AI Algorithm Mitigate Racial Economic Inequality? An Analysis in the Context of Airbnb. Informs Marketing Science. Available on: https://doi.org/10.1287/mksc.2021.1295 [last visited on 20 June 2022]


2022-06-22
Algorithm to detect targets in military operations
U.S. Army
United States (US)
Project Maven
policing and security
automating tasks
public administration
weakening of democratic practices
other kinds of discrimination

N/A

N/A
N/A

N/A

being tested

In a bid to deploy artificial intelligence in military operations, the U.S. Army is developing an algorithm to identify strike targets. The algorithm is intended to make it easier for the Army to find targets using satellite images and geolocation data. It was tested in an experimental program which assessed its ability to find a missile at a given angle. In the initial test, the algorithm had a high success rate (Defense One, 2021).

However, in a subsequent test, where the algorithm was supposed to identify multiple missiles from a different angle, it was unable to do so. Worse still, the algorithm evaluated its own performance at a 90% success rate when, in reality, it was much closer to 25%. The development of algorithms intended for militaristic purposes must be especially scrutinised: if not vetted properly, with accurate training data that mimics the fragility of wartime conditions, algorithms can prove fatal (AirforceMag, 2021).

  • AirforceMag. (2021, September 21). AI Algorithms Deployed in Kill Chain Target Recognition. Air Force Magazine. Available on https://www.airforcemag.com/ai-algorithms-deployed-in-kill-chain-target-recognition/ [Last visited on 22 June 2022]
  • Defense One. (2021). This Air Force Targeting AI Thought It Had a 90% Success Rate. It Was More Like 25%. Defense One. Available on https://www.defenseone.com/technology/2021/12/air-force-targeting-ai-thought-it-had-90-success-rate-it-was-more-25/187437/ [Last visited on 22 June 2022]
2022-06-22
ISAAK | Employee behaviour analysis
N/A
United Kingdom (UK)
StatusToday
labour and employment
profiling and ranking people
private sector
threat to privacy
socioeconomic discrimination

N/A

2019-XX-XX
N/A

N/A

not known

[Updated version, 9 June 2022]

Isaak is an AI software launched in 2019 and developed by British company StatusToday, which was bought by Italian company Glickon in June 2020 (Modi, 2020; Glickon, 2022). Isaak aimed at providing employee behavioural analytics to employers and management.

  • Booth, R. (2019). UK businesses using artificial intelligence to monitor staff activity. The Guardian, 7 April 2019. Available on https://www.theguardian.com/technology/2019/apr/07/uk-businesses-using-artifical-intelligence-to-monitor-staff-activity [last visited on 18 May 2022]
  • Glickon (2020). AI Analytics platform StatusToday acquired by Glickon. Official Glickon blog, 11 June 2020. Available on https://blog.glickon.com/ai-analytics-platform-statustoday-acquired-by-glickon/ [last visited on 9 June 2022]
2022-06-09
EAB| University enrollment algorithm
N/A
United States (US)
Navigate
education and training
automating tasks
predicting human behaviour
private sector
racial discrimination
socioeconomic discrimination

N/A

N/A
N/A

N/A

active use

The EAB algorithm, developed by Navigate, is deployed at several major universities in the US. The company deploys algorithms which enable financial aid offices in universities to design strategies for how much funding is made available to scholarship applicants. EAB considers several factors and data points in optimizing its models, including academic profile, demographic information, standardized test scores, where applicants live and similar data (Engler, 2021).

Ultimately, the algorithm determines which students are more likely to accept an offer from a higher-learning institution, as well as how revenue can be raised for the institution through higher tuition and targeted scholarships. Additionally, deploying the algorithm is intended to reduce the manual labour performed by admissions office employees in processing several hundreds or thousands of applications (The Regulatory Review, 2022).

  • Engler, A. (2021). Enrollment algorithms are contributing to the crises of higher education. Brookings. Available on https://www.brookings.edu/research/enrollment-algorithms-are-contributing-to-the-crises-of-higher-education/ [Last visited on 3 June 2022]
  • Review, T. R. (2022, February 2). How Enrollment Algorithms Worsen the Student Debt Crisis. The Regulatory Review. Available on https://www.theregreview.org/2022/02/02/ross-enrollment-algorithms-worsen-student-debt-crisis/ [Last visited on 3 June 2022]
2022-06-09
Algorithm to reduce the length of stay for patients in hospitals
N/A
N/A
University of Chicago Medicine's (UCM) data analytics group
health
profiling and ranking people
private sector
racial discrimination

N/A

N/A
N/A

N/A

not implemented

At the end of 2017, Dr. John Fahrenbach, a data scientist at the University of Chicago Medicine (UCM), developed a machine learning model that used clinical characteristics to identify patients most suitable for discharge after 48 hours. Using this tool, the hospital could decrease patients' length of stay by allocating and prioritizing care management resources, including discharge planning, home health services, and clinical or patient administrative assistance.

During the development process, Dr. Fahrenbach’s team determined that including zip codes as a feature increased the model’s predictive accuracy. However, trends in the data showed that patients from predominantly Caucasian and relatively affluent zip codes were more likely to have shorter lengths of stay. Conversely, zip codes correlated with longer lengths of stay were predominantly African American and lower-income.

  • R. (2018, December 7). How algorithms can create inequality in health care, and how to fix it. University of Chicago News. Available on: https://news.uchicago.edu/story/how-algorithms-can-create-inequality-health-care-and-how-fix-it [last visited on 01 June 2022]


2022-06-02
Algorithm for estimating kidney function
U.S. medical laboratories
United States (US)
The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) research group
health
profiling and ranking people
private sector
racial discrimination

Yes

2009-XX-XX
2021-XX-XX

N/A

no longer in use

Glomerular filtration rate (GFR) is considered the best overall index of kidney function. In 2009, the CKD-EPI research group developed an upgraded version of the GFR estimating equation to increase its accuracy. CKD-EPI’s algorithm converts the result of a blood test for a person’s level of the waste product creatinine into a measure of kidney function, the estimated GFR. The index is computed from an equation that uses serum creatinine, age, race, sex and body size as variables (Levey, A. S., et al. 2009). The resulting score indicates the severity of a person’s disease and guides health professionals on what care the person should receive, for example whether to refer someone to a kidney specialist or for a kidney transplant.
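For reference, the 2009 CKD-EPI creatinine equation published by Levey et al. takes roughly the following form. The sketch below is illustrative only, not a clinical implementation, and it includes the race multiplier that was criticised and later removed in the 2021 refit of the equation.

```python
def ckd_epi_2009_egfr(scr_mg_dl: float, age: float, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine (mg/dL), age, sex and race,
    following the coefficients reported for the 2009 CKD-EPI equation."""
    kappa = 0.7 if female else 0.9        # sex-specific creatinine cut-off
    alpha = -0.329 if female else -0.411  # sex-specific exponent
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159   # the race multiplier at the centre of the criticism
    return egfr

print(round(ckd_epi_2009_egfr(1.1, 50, female=False, black=False), 1))
```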


  • Simonite, T. (2020, October 26). How an Algorithm Blocked Kidney Transplants to Black Patients. Wired. Available on: https://www.wired.com/story/how-algorithm-blocked-kidney-transplants-black-patients/ [last visited on 13 May 2022]
  • Ahmed, S., Nutt, C. T., Eneanya, N. D., Reese, P. P., Sivashanker, K., Morse, M., Sequist, T., & Mendu, M. L. (2020). Examining the Potential Impact of Race Multiplier Utilization in Estimated Glomerular Filtration Rate Calculation on African-American Care Outcomes. Journal of General Internal Medicine. 15 October 2020. Available on: https://link.springer.com/article/10.1007/s11606-020-06280-5[last visited on 13 May 2022]
2022-06-01
InstaJobs | Algorithm for matching job candidates with opportunities
LinkedIn
worldwide
LinkedIn
labour and employment
profiling and ranking people
private sector
gender discrimination

Yes

Two publicly released audits were conducted by LinkedIn researchers in which this algorithm was examined for bias (Yu, Y., & Saint-Jacques, G. (2022) and Roth, J., Saint-Jacques, G., & Yu, Y. (2021)). Both audits applied an outcome test of discrimination for ranked lists, even though InstaJobs is a classification algorithm. For one audit, they gathered data on the InstaJobs algorithm's scores for approximately 193,000 jobs. In both cases, they used the scores constructed by the algorithm for each job to create a listwise ranking of the candidates and applied outcome tests to this ranked data.

2018-XX-XX
N/A

N/A

active use

Some years ago, LinkedIn discovered that its InstaJobs algorithm, a recommendation algorithm used to notify candidates of job opportunities, was biased.

To predict the probability that a candidate will apply for a job, as well as the probability that the application will receive attention from the recruiter, InstaJobs uses features about the job posting and the candidates. The algorithm is designed to understand the preferences of both parties and to make recommendations accordingly. It generates a predicted score from these features for each candidate and sends notifications to candidates above a threshold.
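A minimal sketch of the thresholded notification step described above; the candidate fields, the multiplicative combination of the two probabilities and the threshold value are assumptions made for illustration, not LinkedIn's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    p_apply: float                # predicted probability the candidate applies
    p_recruiter_attention: float  # predicted probability the recruiter engages

def notify(candidates: list[Candidate], threshold: float = 0.5) -> list[str]:
    """Combine the two predicted probabilities into one score and notify
    only candidates whose score clears the threshold."""
    return [c.name for c in candidates
            if c.p_apply * c.p_recruiter_attention >= threshold]

print(notify([Candidate("a", 0.9, 0.8), Candidate("b", 0.6, 0.5)]))  # -> ['a']
```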

  •  Wall, S. (2021, June 25). LinkedIn’s job-matching AI was biased. The company’s solution? More AI. MIT Technology Review. Available on: https://www.technologyreview.com/2021/06/23/1026825/linkedin-ai-bias-ziprecruiter-monster-artificial-intelligence/ [last visited on 22 May 2022]
  • Yu,Y., & Saint-Jacques, G. (2022). Choosing an algorithmic fairness metric for an online marketplace: Detecting and quantifying algorithmic bias on LinkedIn. Available on: https://arxiv.org/abs/2202.07300 [last visited on 22 May 2022]
2022-06-01
PageRank (PR) | algorithm used by Google Search to rank web pages in their search engine results
Google Inc.
worldwide
Larry Page and Sergey Brin
business and commerce
communication and media
automating tasks
predicting human behaviour
private sector
racial discrimination
religious discrimination
social polarisation / radicalisation
disseminating misinformation

Yes, there is a study on auditing PageRank on large graphs by Kang et al. (2018), which notes that, important as PageRank might be, the state of the art lacks an intuitive way to explain its ranking results (or those of its variants): for example, why it deems the returned top-k webpages the most important ones in the entire graph, or why it gives a higher rank to actor John than to actor Smith in terms of their relevance to a particular movie. In order to answer these questions, the paper proposes a paradigm shift for PageRank, from identifying which nodes are most important to understanding why the ranking algorithm gives a particular ranking result. The authors formally define the PageRank auditing problem, whose central idea is to identify a set of key graph elements (e.g., edges, nodes, subgraphs) with the highest influence on the ranking results. They formulate it as an optimization problem and propose a family of effective and scalable algorithms (AURORA) to solve it.

There is also a study on au

1998-09-01
2019-09-24

N/A

no longer in use

Larry Page and Sergey Brin developed PageRank at Stanford University in 1996 as part of a research project about a new kind of search engine. PageRank was implemented by Google Inc. from 1 September 1998 to 24 September 2019 to provide the basis for all of Google's web-search tools, and was used by Google Search to rank web pages in its search engine results ("PageRank - Wikipedia", 2022). Potential social impacts of the algorithm are racial discrimination, religious discrimination, social polarisation/radicalisation and the dissemination of misinformation. The PageRank algorithm outputs a probability distribution representing the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. Several research papers assume that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank comp
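A minimal power-iteration sketch of the computation described above, with the rank initially divided evenly across all pages; the damping factor of 0.85 is the commonly cited default rather than a value taken from this entry.

```python
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """Iteratively compute PageRank for a small link graph.
    `links` maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}        # start from an even distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Example: three pages linking to one another.
print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```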

  • Kang, J., Tong, H., Xia, Y., & Fan, W. (2018). AURORA: Auditing PageRank on Large Graphs. Retrieved 30 May 2022, from https://www.researchgate.net/publication/323770852_AURORA_Auditing_PageRank_on_Large_Graphs.
  • PageRank - Wikipedia. En.wikipedia.org. (2022). Retrieved 30 May 2022, from https://en.wikipedia.org/wiki/PageRank#:~:text=PageRank%20(PR)%20is%20an%20algorithm,the%20importance%20of%20website%20pages.
2022-05-30
Algorithm to detect potential violent encounters
N/A
United States (US)
Athena Security
policing and security
evaluating human behaviour
private sector
public administration
threat to privacy

N/A

N/A
N/A

N/A

being tested

Athena Security focuses on the use of emerging technologies to assist in security operations. The company develops crime-alert tools which are trained using thousands of videos simulating weapon-related activities in a range of public and private places. The algorithm is built to scan an area, detect a weapon and raise a red flag to the relevant security officers (Defense One, 2019).

Despite the premise of the algorithm and its intended use to improve safety and security, several concerns abound about the use of such recognition algorithms. First, there is no nuance about the operation of weapons in a legal context, particularly where greater calls for armed security guards in schools are being raised or where a person exercises their right to carry a weapon. Secondly, there have been instances of misidentification of items which have led to persons being injured or killed because security officers acted without exercising full judgement (ELear

  • Builtinaustin. Athena Security Raises $5.5M to Spot Weapons with AI | Built in Austin. Available on www.builtinaustin.com/2019/06/13/athena-security-raises-5m-seed[last visited on 27 May 2022]
  • Defense One. “Here Come AI-Enabled Cameras Meant to Sense Crime before It Occurs.” Defense One, 2019, Available on www.defenseone.com/technology/2019/04/ai-enabled-cameras-detect-crime-it-occurs-will-soon-invade-physical-world/156502/ [last visited on 27 May 2022].
2022-05-27
Algorithm to select potential employees
Twitter
Google (owned by Alphabet)
Amazon
Facebook (owned by Meta)
United States (US)
Luca Bonmassar
labour and employment
compiling personal data
profiling and ranking people
private sector
gender discrimination
age discrimination

N/A

2013-XX-XX
N/A

N/A

no longer in use

An online tech hiring platform, Gild, enabled employers to use ‘social data’ (in addition to other resources such as resumes) to rank candidates by social capital. Essentially, ‘social data’ is a proxy that refers to how integral a programmer is to the digital community, drawing on time spent sharing and developing code on development platforms such as GitHub (Richtel, 2013).

  • Logg, J. M. (2017). Theory of Machine: When Do People Rely on Algorithms? Harvard Business School Working Paper Series # 17-086. Available at https://dash.harvard.edu/handle/1/31677474 [Last visited on 27 May 2022]
  • Njoto, S. (2020). Gendered Bots? Bias in the use of Artificial Intelligence in Recruitment. Available at https://arts.unimelb.edu.au/__data/assets/pdf_file/0008/3440438/Sheilla-Njoto-Gendered-Bots.pdf [Last visited on 27 May 2022]
2022-05-27
CrimSAFE | Profiling potential tenants based on criminal records
N/A
United States (US)
CoreLogic
business and commerce
profiling and ranking people
compiling personal data
private sector
other kinds of discrimination

N/A

N/A
N/A

In 2018, the Connecticut Fair Housing Center and Carmen Arroyo filed a suit against CoreLogic alleging that the algorithm discriminated against her son by denying him the opportunity to be fairly housed. The lawsuit stemmed from Carmen Arroyo's 2016 application to move her son into her apartment from a nursing home to facilitate his medical care following an accident that left him unable to care for himself. Her son, Mikhail Arroyo, was rejected because the algorithm determined that he had a 'disqualifying criminal record', leaving him in the nursing home for another year. The criminal record in question was a shoplifting charge which had been dropped and was deemed an infraction, lower than a misdemeanor. The Connecticut federal District Court determined that consumer reporting agencies owe a duty not to discriminate in carrying out tenant-screening activities, specifically towards persons with disabilities (National Housing Law Project, 2019).

active use

CoreLogic offers a wide array of screening tools to landlords who are looking to evaluate potential tenants. CrimSAFE assesses tenant suitability by searching through applicants' criminal records and notifying landlords if a prospective tenant does not meet the criteria that they establish (Lecher, 2019). CoreLogic states that it uses multiple sources of criminal data to make a determination, including data from most states and data from the Department of Public Safety. The algorithm does not assess risk; it either qualifies or disqualifies an applicant based on criteria set by the landlord (Pazniokas, 2019).

The use of automated background checks presents several opportunities for discriminatory outcomes. The algorithms are not transparent about the exact data or weighting systems used to make determinations. The landlord is also not transparent with the prospective renter about the criteria they set to disqualify applicants, and this can lead to a landlord setting st

  • Cohen Milstein. (2022). “Comment: CoreLogic Use of Algorithms to Screen Housing Candidates Challenged in Approaching Trial,” MLex | Cohen Milstein. Available on https://www.cohenmilstein.com/update/%E2%80%9Ccomment-corelogic-use-algorithms-screen-housing-candidates-challenged-approaching-trial%E2%80%9D [Last visited on 19 May 2022]
  • Lecher C. (2019). Automated background checks are deciding who’s fit for a home. The Verge. Available on https://www.theverge.com/2019/2/1/18205174/automation-background-check-criminal-records-corelogic [last visited on 16 May 2022]
2022-05-19
Algorithm to predict the need for medical care in high-risk patients
Optum
United States (US)
Optum
social services
automating tasks
profiling and ranking people
private sector
public administration
racial discrimination

A study by Obermeyer et al. found evidence of racial bias in this algorithm: "At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts." (Obermeyer & al, 2019)

N/A
N/A

N/A

active use

The algorithm was developed by health services innovation company Optum to help "hospitals identify high-risk patients, such as those who have chronic conditions, to help providers know who may need additional resources to manage their health" (Morse, 2019).

The algorithm uses historical and demographic data to predict how much a patient is going to cost the health-care system, as an intended proxy for identifying the patients at higher risk and in greater need of care. However, for sociopolitical reasons, African Americans in the US have historically incurred lower costs than white people, which means that the algorithm systematically identified white people as more in need of care than Black people who were actually sicker (Obermeyer et al., 2019; Morse, 2019; Johnson, 2019).
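The core of the problem described above can be seen in a toy example: if access barriers make recorded costs lower for one group at the same level of illness, then ranking patients by predicted cost under-selects that group. All numbers below are invented for illustration.

```python
# Invented records: group B patients are equally or more ill but have lower recorded costs.
patients = [
    {"group": "A", "illness": 8, "cost": 8000},
    {"group": "B", "illness": 8, "cost": 5000},
    {"group": "A", "illness": 5, "cost": 6000},
    {"group": "B", "illness": 9, "cost": 5500},
]

top_by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:2]
top_by_illness = sorted(patients, key=lambda p: p["illness"], reverse=True)[:2]

print([p["group"] for p in top_by_cost])     # ['A', 'A']: the cost proxy selects only group A
print([p["group"] for p in top_by_illness])  # ['B', 'A']: an illness measure selects the sickest patients
```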

  • Jee, C. (2019). A biased medical algorithm favored white people for health-care programs. MIT Technology Review, 25 October 2019. Available on: https://www.technologyreview.com/2019/10/25/132184/a-biased-medical-algorithm-favored-white-people-for-healthcare-programs/ [last visited on 18 May 2022]
  • Johnson, C. Y. (2019). Racial bias in a medical algorithm favors white patients over sicker black patients. The Washington Post, 24 October 2019. Available on: https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/ [last visited on 18 May 2022]
2022-05-18
Algorithm to prioritise recipients of Covid-19 vaccines
Stanford Medicine Hospital
California (US)
Stanford Medicine Hospital
labour and employment
automating tasks
profiling and ranking people
private sector
socioeconomic discrimination
other kinds of discrimination

N/A

2020-12-XX
N/A

N/A

not known

In late 2020, the Stanford Medicine Hospital developed an algorithm to automate the process of deciding which of its staff to prioritise for the first available Covid-19 vaccines. Reportedly, the algorithm was a simple rules-based formula that computed a discrete series of factors: "It considers three categories: “employee-based variables,” which have to do with age; “job-based variables”; and guidelines from the California Department of Public Health. For each category, staff received a certain number of points, with a total possible score of 3.48. Presumably, the higher the score, the higher the person’s priority in line." (Guo & Hao, 2020)

Despite the fact that resident physicians were more at risk of getting Covid-19 through their closer contact with Covid-19 patients, the algorithm gave them a low priority, and only seven of the 1,300 resident medical staff were selected to receive one of the first 5,000 vaccines available to Stanford Medicine. Because of the way the algorithm wa

  • Asimov, N. (2020). Stanford apologizes after doctors protest vaccine plan that put frontline workers at back of line. San Francisco Chronicle, 18 December 2020. Available on https://www.sfchronicle.com/health/article/Stanford-doctors-protest-vaccine-plan-saying-15814502.php [last visited on 13 May 2022]
  • Chen, C. (2020). Only Seven of Stanford’s First 5,000 Vaccines Were Designated for Medical Residents. ProPublica, 18 December 2020. Available on: https://www.propublica.org/article/only-seven-of-stanfords-first-5-000-vaccines-were-designated-for-medical-residents [last visited on 13 May 2022]
2022-05-13
COMPAS | Assessing the risk of prisoners' recidivism
US Criminal Courts
United States (US)
Equivant
justice and democratic processes
automating tasks
evaluating human behaviour
predicting human behaviour
profiling and ranking people
public administration
racial discrimination
gender discrimination
weakening of democratic practices

N/A

2001-XX-XX
N/A

N/A

active use

COMPAS (the acronym of Correctional Offender Management Profiling for Alternative Sanctions) is an algorithmic tool developed and launched by Northpointe (now called Equivant) circa 1997 to assess, among other things, the risk of recidivism of prisoners, and it can be used to assist judges in deciding whether and under which conditions to release a prisoner. COMPAS, which has been used in New York State since at least 2001, computes different risk factors associated with a given prisoner and produces an overall risk score of between 1 (low risk) and 10 (high risk). Over the years, COMPAS (as well as other similar tools) has been adopted by more and more courts in the US (Equivant, 2019 & 2022; Angwin, 2016).

Research has shown that COMPAS tends to discriminate against people of colour by reportedly assigning them systematically higher risk scores than those given to white people in similar circumstances. A graphic example is that of an 18-year-old black woman who was arrested f

  • Angwin, J. & al. (2016). Machine Bias. ProPublica, 23 May 2016. Available on: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [last visited on 13 May 2022]
  • Equivant (2019). Practitioner's Guide to COMPAS Core. Equivant, 4 April 2019. Available on https://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf [last visited on 13 May 2022]
2022-05-13
NUTRI-SCORE | Algorithm to rank nutritional content of various foods
food brands
food producers
France
Germany
Belgium
Switzerland
Spain
Luxembourg
Netherlands
Public Health France
product safety
ranking bids and other content submissions
public administration
private sector
other kinds of discrimination
socioeconomic discrimination
disseminating misinformation

Nutri-Score is an open-source algorithm that's publicly available. (Santé publique France, 2021)

2017-XX-XX
N/A

N/A

active use

Nutri-Score is a nutrition labelling scheme that seeks to rank products and score them based on their nutritional composition. It was developed by the French Public Health authority and based on work done by researchers and other institutions (Santé publique France, 2022).

Nutri-Score uses a simple and open-source algorithm to give different food products a value on a five-point scale, from A (very good) to E (very bad), and a corresponding colour from green (A) to red (E). The algorithm measures the amounts of nutrients that "should be encouraged (fibers, proteins, fruits, vegetables, pulse, nuts, and rapeseed, walnut and olive oils)" and of "nutrients that should be limited: energy, saturated fatty acid, sugars, salt" (Santé publique France, 2021 & 2022).
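As a simplified sketch of the scoring structure (not the official implementation), unfavourable nutrients earn 'negative' points, favourable ones earn 'positive' points, and the difference is mapped onto the A-E scale. The letter cut-offs below are the commonly cited ones for general solid foods; the per-nutrient point tables and the special rules (for example, when protein points may be counted) are omitted.

```python
def nutri_score_letter(negative_points: int, positive_points: int) -> str:
    """Map a Nutri-Score point total to its letter grade for general (solid) foods.

    negative_points: points for nutrients to limit (energy, saturated fat, sugars, salt).
    positive_points: points for nutrients to encourage (fibre, protein, fruit/veg/pulses/nuts).
    """
    score = negative_points - positive_points
    if score <= -1:
        return "A"   # displayed in green
    if score <= 2:
        return "B"
    if score <= 10:
        return "C"
    if score <= 18:
        return "D"
    return "E"       # displayed in red

print(nutri_score_letter(negative_points=11, positive_points=5))  # -> "C"
```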

  • Egnell, M & al. (2018)., Objective understanding of Nutri-Score Front-Of-Package nutrition label according to individual characteristics of subjects: Comparisons with other format labels. PLoS ONE, 13(8), 23 August 2018. Available on https://doi.org/10.1371/journal.pone.0202095 [last visited on 13 May 2022]
  • Giron, S. (2020). Le Scan : des frites "A" et de l'huile d'olive "D", peut-on se fier au Nutri-score ? RTBF, 13 January 2020. Available on https://www.rtbf.be/article/des-frites-a-et-de-l-huile-d-olive-d-peut-on-se-fier-au-nutri-score-10385161?id=10385161 [last visited on 13 May 2022]
2022-05-13
TikTok's content recommendation algorithm
TikTok (owned by ByteDance)
worldwide
TikTok (owned by ByteDance)
communication and media
making personalised recommendations
private sector
disseminating misinformation
social polarisation / radicalisation
manipulation / behavioural change
racial discrimination
other kinds of discrimination
threat to privacy

N/A

2017-XX-XX
N/A

N/A

active use

TikTok is a social media app in which users (mostly teenagers and young adults) share and watch short videos (normally between 15 and 60 seconds long), launched in 2017 by the Chinese company ByteDance as an international version of its app Douyin, which had been available in China since 2016 (Klug & al., 2021; Shu, 2020).

Like other social media apps, TikTok uses an algorithm to automate the curation of the content its users see, and by default TikTok "presents videos to the user on their ‘for you’ page as an endless, hard to anticipate flow of auto-looped videos to swipe through". (Klug & al., 2021)

  • Bandy, J. & al (2020). #TulsaFlop: A Case Study of Algorithmically-Influenced Collective Action on TikTok. FAccTRec 2020 Workshop on Responsible Recommendation, 14 December 2020. Available on: https://doi.org/10.48550/arXiv.2012.07716 [last visited on 4 May 2022]
  • Griffin, A. (2021). TikTok update: huge change to algorithm will change user's for your page. The Independent, 16 December 2021. Available on https://www.independent.co.uk/life-style/gadgets-and-tech/tiktok-update-algorithm-fyp-for-your-b1977474.html [last visited on 4 May 2022]
2022-05-12
SyRI | Detecting welfare fraud
Dutch Government
Netherlands
social services
predicting human behaviour
public administration
racial discrimination
socioeconomic discrimination

N/A

2013-XX-XX
N/A

N/A

no longer in use

The Dutch Government developed a risk indication system to detect welfare fraud. The algorithm collects data including employment records, personal debt records and previous benefits received by an individual. Additionally, the algorithm analyses education levels and housing histories to determine which individuals are at a high risk of committing benefit fraud. The deployment of the algorithm was disastrous, with over 26,000 families accused of trying to commit social benefits fraud. The families were asked to return over 1 billion dollars, and having to pay back the benefits pushed many of them to financial ruin.

Additionally, people of colour were disproportionately affected, with many reporting lasting mental health issues. An extensive investigation by the Dutch Data Protection Authority revealed that the algorithm categorized neutral factors such as dual nationality as risky, and that several initial complaints were not properly followed up or given lit

  • Geiger, G. (2021, March 1). How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud. VICE. Available on https://www.vice.com/en/article/jgq35d/how-a-discriminatory-algorithm-wrongly-accused-thousands-of-families-of-fraud [last visited on 20 April 2022]
  • Heikkila, M. (2022, March 30). AI: Decoded: A Dutch algorithm scandal serves a warning to Europe — The AI Act won’t save us. POLITICO. Available on https://www.politico.eu/newsletter/ai-decoded/a-dutch-algorithm-scandal-serves-a-warning-to-europe-the-ai-act-wont-save-us-2/ [last visited on 20 April 2022]
2022-04-20
LS/CMI | Risk assessment for parole
Massachusetts Parole Board
Massachusetts (US)
MultiHealth Systems Inc
justice and democratic processes
predicting human behaviour
public administration
other kinds of discrimination

N/A

2013-XX-XX
N/A

Mr. Jose Rodriguez, in Rodriguez v. Massachusetts Parole Board, SJC-13197 (2021), filed a lawsuit against the Massachusetts Parole Board. The suit alleges that the denial of his application for parole was due, in part, to reliance on a predictive algorithm used to determine an applicant's suitability for parole, resulting in an unconstitutional use of discretion by the parole board. In particular, Mr. Rodriguez challenged the suitability of the algorithm for his case, as he was a juvenile offender and was 60 years old at the time of filing the suit. This raised concerns about the ability of the Parole Board to take into account dynamic changes in an offender's life, particularly for a juvenile offender, over a long period of time. The case is currently ongoing.

active use

LS/CMI is a predictive analytical tool used to predict recidivism in parole applicants. It is deployed in Massachusetts, which has licensed it from MultiHealth Systems, Inc. since 2013. The algorithm uses a point system to assess parole applicants according to 54 predictive factors, including education level and marital status; a higher score indicates that the applicant is less likely to be approved for parole. The use of the tool is hidden from applicants, and the exact scoring system, the weight given to risk factors and any special considerations are not divulged, leading to concerns about the tool's accuracy (Suffolk University Law School, 2022).
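
The actual LS/CMI factors, weights and cut-offs are proprietary and not public. Purely as an illustrative sketch of an additive point system of this general kind, the logic can look like the following (all factor names, weights and thresholds below are invented):

    # Illustrative sketch of an additive risk-point system (NOT the real LS/CMI
    # instrument): factor names, weights and thresholds below are invented.
    EXAMPLE_FACTORS = {
        "prior_convictions": 3,    # hypothetical weight per factor
        "education_level_low": 2,
        "unmarried": 1,
    }

    def risk_score(answers: dict) -> int:
        """Sum the weights of every factor answered 'yes'."""
        return sum(weight for factor, weight in EXAMPLE_FACTORS.items()
                   if answers.get(factor))

    def risk_band(score: int) -> str:
        """Map the total score onto coarse risk bands (thresholds invented)."""
        if score <= 1:
            return "low"
        if score <= 3:
            return "medium"
        return "high"

    print(risk_band(risk_score({"prior_convictions": True, "unmarried": True})))  # high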

In Mr. Rodriguez’s case, the assessment is in question because analyses of the algorithm indicate that it does not take into account the unique circumstances of juvenile offenders. Where predictive tools are concerned, different circumstances must be weighted differently by the system to produce accurate assessments.

  • Casadei, B. D. (2020, March 24). Predicting prison terms and parole. Downtown publications. Available on https://www.downtownpublications.com/single-post/2020/03/24/predicting-prison-terms-and-parole [last visited on 18 April 2022]
  • EPIC. (2022). Rodriguez v. Massachusetts Parole Board. EPIC - Electronic Privacy Information Center. Available on https://epic.org/documents/rodriguez-v-massachusetts-parole-board/ [last visited on 18 April 2022]
2022-04-18
CHINOOK | Algorithm to augment and replace human decision-making in immigration management
Canadian Government
Canada
justice and democratic processes
policing and security
automating tasks
public administration
other kinds of discrimination

N/A

N/A
N/A

N/A

being tested

The Canadian government has, for several years, been working on several algorithmic solutions to automate and speed up the processing of immigration applications. These tools have been tested with Immigration, Refugees and Citizenship Canada (IRCC), which has confirmed that automated systems are already in place to triage applications and sort them into two categories: simple cases, which are processed further, and complex cases, which are flagged for review (Meurrens, 2021).

The algorithm used, Chinook, extracts applicant information and presents it in a spreadsheet which is assigned to visa officers who can review several applications on a single screen. It also allows the officers to create risk indicators and flag applications which need further review. (Nalbandian, 2021)

  • Adoption of AI in Immigration Raises Serious Rights Implications. Ihrp.law.utoronto.ca. Available on https://ihrp.law.utoronto.ca/news/canadas-adoption-ai-immigration-raises-serious-rights-implications [last visited on 30 March 2022]
  • Meurrens, S. (2021). The increasing role of AI in visa processing. Canadian Immigrant. Available on https://canadianimmigrant.ca/immigrate/immigration-law/the-increasing-role-of-ai-in-visa-processing [last visited on 30 March 2022]
2022-03-28
Algorithm used to determine ride-share prices
Uber
United States (US)
Uber
business and commerce
making personalised recommendations
private sector
other kinds of discrimination

N/A

N/A
N/A

N/A

active use

Price determination for ride-share applications is typically dependent on the distance to be covered and the time it will take to cover that distance. While this model is the current basis for pricing, algorithms to determine dynamic, individualized pricing have been introduced that may be discriminatory.
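
As a minimal sketch of this kind of pricing logic, the classic distance/time fare can be combined with a dynamic multiplier (the base fare, rates and demand multiplier below are invented for illustration and are not Uber's actual parameters):

    # Toy ride-price calculation: a distance/time base fare adjusted by a
    # demand-based multiplier. All rates below are invented for illustration.
    BASE_FARE = 2.50   # flat pick-up fee
    PER_KM = 1.10      # cost per kilometre
    PER_MIN = 0.35     # cost per minute

    def ride_price(distance_km: float, duration_min: float,
                   demand_multiplier: float = 1.0) -> float:
        """Classic distance/time fare, scaled by a dynamic-pricing multiplier."""
        base = BASE_FARE + PER_KM * distance_km + PER_MIN * duration_min
        return round(base * demand_multiplier, 2)

    print(ride_price(8.0, 20.0))                         # off-peak price
    print(ride_price(8.0, 20.0, demand_multiplier=1.4))  # same trip under higher demand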

A study conducted on 100 million ride-hailing samples from the city of Chicago showed a tendency for prices to increase in areas with larger non-white populations, higher poverty levels, younger residents and higher education levels. The study found that prices rise in these areas because their residents tend to use ride-hailing apps more, and the dynamic pricing is based on that usage. This creates a punitive effect for those who most need ride-hailing services and, given the demographic distribution, can discriminate against certain groups.

  • Lu, D. (2020). Uber and Lyft pricing algorithms charge more in non-white areas. New Scientist. Available on https://www.newscientist.com/article/2246202-uber-and-lyft-pricing-algorithms-charge-more-in-non-white-areas/ [last visited on 18 March 2022]
  • Pandey, A., & Caliskan, A. (2021). Iterative Effect-Size Bias in Ridehailing: Measuring Social Bias in Dynamic Pricing of 100 Million Rides. https://arxiv.org/pdf/2006.04599.pdf [last visited on 18 March 2022]
2022-03-28
Epic Deterioration Index | Triaging patients to determine level of care
EPIC
United States (US)
EPIC
social services
profiling and ranking people
automating tasks
private sector
other kinds of discrimination

N/A

2020-XX-XX
2021-XX-XX

N/A

active use

The deterioration index is a tool used to triage patients, that is, to determine how much care and treatment a patient needs based on how sick they are. The index seeks to replace the traditional triage system, done by medical staff, with an automated system that collects data from patients and makes determinations. At the initial stages of the Covid-19 pandemic, Epic, the electronic health record company, released the tool to help doctors distinguish between low-risk and high-risk patients for transfer to the ICU. The algorithm uses different risk factors such as age, chronic conditions, obesity and previous hospitalizations.

The algorithm was rapidly deployed to America’s largest health care systems; however, it had not been independently validated prior to its deployment. The risks of deploying triaging algorithms can be severe, as a bias in the system can mean that patients do not get adequate healthcare. Concerns have also been raised about the lack of transparency of the model.

  • Khetpal, V. & Shah, N. (2021). Amid a Pandemic, a Health Care Algorithm Shows Promise and Peril. Undark Magazine, 27 May 2021. Available on https://undark.org/2021/05/27/health-care-algorithm-promise-peril/ [last visited on 23 March 2022]
  • Khetpal, V. & Shah, N. (2021, May 28). How a largely untested AI algorithm crept into hundreds of hospitals. Fast Company. Available on https://www.fastcompany.com/90641343/epic-deterioration-index-algorithm-pandemic-concerns [last visited on 24 March 2022]
2022-03-24
Algorithm to profile employees based on productivity
Amazon
United States (US)
Amazon
labour and employment
evaluating human behaviour
profiling and ranking people
private sector
other kinds of discrimination

N/A

N/A
N/A

N/A

active use

Amazon uses a proprietary productivity tool to assess whether its employees are meeting their work quotas. The system tracks each individual’s rate of productivity, generates automated reports about the quality or productivity of workers, and delivers warnings or terminations to employees.

The system has led to the firing of hundreds of workers for various infractions and has faced scrutiny, as well as labour disputes, for being excessively punitive. For workers, keeping up with productivity quotas has meant skipping crucial breaks, such as bathroom breaks, and neglecting their physical and mental health. Though supervisors are able to override the system, it is relied on as a primary decision maker.

  • Futurism (2019). Amazon Used an AI to Automatically Fire Low-Productivity Workers. Futurism. Available on  https://futurism.com/amazon-ai-fire-workers [last accessed on 24 March 2022]
  • Lecher, C. (2019). How Amazon automatically tracks and fires warehouse workers for “productivity”. The Verge. Available on  https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations [last accessed on 24 March 2022]
2022-03-24
SIMPLIFIED ARRIVAL | Facial recognition at US border-crossing points
US Customs and Border Protection
United States (US)
US Customs and Border Protection
policing and security
automating tasks
identifying images of faces
public administration
racial discrimination
gender discrimination
threat to privacy
state surveillance

N/A

2018-XX-XX
N/A

In 2017, the Electronic Privacy Information Center (EPIC) sued the US Customs and Border Protection (CBP) after the CBP had failed to comply with EPIC's Freedom of Information Act requests for information about the CBP's "Biometric Entry-Exit Program", of which the Simplified Arrival algorithmic system is part. EPIC, a non-profit research and advocacy centre based in Washington, DC, in the US, has been publishing the documents concerning this case on its website. (EPIC, 2019)

active use

The US Customs and Border Protection (CBP) developed the Simplified Arrival algorithmic system as part of its use of biometrics and has been using it since at least 2018. The idea of Simplified Arrival is to increase border security while, as the name suggests, simplifying the process of identifying people crossing a border to enter the US. (CBP, 2022; Glusac, 2022) To achieve that, the CBP "selected facial recognition, which uses a computer algorithm to compare a picture taken in person at airport immigration or another border checkpoint to the traveler’s passport picture or visa". (Glusac, 2022)
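
CBP has not published its matching pipeline; a heavily simplified sketch of what a face-match check of this kind can look like is given below (the embedding vectors, the threshold and the helper names are assumptions for illustration only):

    import math

    # Toy face-match check: compare two face "embeddings" (feature vectors that a
    # face-recognition model would produce) with cosine similarity. The vectors
    # and the threshold below are invented for illustration only.
    def cosine_similarity(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    MATCH_THRESHOLD = 0.85  # hypothetical decision threshold

    def is_same_person(live_photo_vec: list[float], passport_photo_vec: list[float]) -> bool:
        return cosine_similarity(live_photo_vec, passport_photo_vec) >= MATCH_THRESHOLD

    print(is_same_person([0.1, 0.8, 0.6], [0.12, 0.79, 0.58]))  # True: vectors are close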

Representatives of the CBP have insisted that the algorithm merely checks whether the picture taken of the person who aims to enter the US matches their picture on their passport and on any other ID documents they might have submitted during the visa process. The border security officer also makes a manual verification and talks to the person aiming to enter the US, and in the end any decision is made by a human officer.

  • Burt, C. (2021). Simplified Arrival face biometrics reaches 5 new ports, faces questions. Biometric Update, 1 December 2021. Available on https://www.biometricupdate.com/202112/simplified-arrival-face-biometrics-reaches-5-new-ports-faces-questions [last visited on 17 March 2022]
  • CBP (US Customs and Border Protection) (2018). Simplified Arrival fact sheet. CBP.gov, August 2018. Available on https://www.cbp.gov/sites/default/files/assets/documents/2018-Aug/Simplified_Arrival_Fact_Sheet.pdf [last visited on 17 March 2022]
2022-03-21
IATOS | Algorithm to detect Covid-19 through audio recording of coughing
Buenos Aires Municipality
Buenos Aires (Argentina)
Buenos Aires Municipality
social services
recognising sounds
public administration
disseminating misinformation

N/A

2022-XX-XX
N/A

N/A

active use

IATos (which could be translated as AICough from Spanish) is an algorithmic system designed to detect whether a person has Covid-19 by analysing an audio file of that person coughing. (Buenos Aires, 2022a)

The Buenos Aires Municipality developed IATos during 2021 and has been implementing it since February 2022. The algorithmic system is available to users through a WhatsApp bot also managed by the Buenos Aires Municipality. People can record themselves coughing and send the audio file to the chatbot on WhatsApp, which then automatically responds to that message recommending whether or not the person should get tested, depending on whether the algorithm considers the person a suspected Covid-19 case. (Buenos Aires, 2022a)
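
The model behind IATos has not been published; as a minimal, hypothetical sketch of the overall pre-screening flow (the "suspicion" feature and the threshold below are placeholders, not the real model):

    # Toy cough pre-screening flow: turn an audio clip into a single "suspicion"
    # score and reply with a recommendation. The scoring is a placeholder for the
    # real (undisclosed) model; it simply looks at average signal energy.
    def suspicion_score(samples: list[float]) -> float:
        """Placeholder feature: mean absolute amplitude of the recording."""
        return sum(abs(s) for s in samples) / len(samples)

    def chatbot_reply(samples: list[float], threshold: float = 0.4) -> str:
        if suspicion_score(samples) >= threshold:
            return "The recording looks suspicious: please get a Covid-19 test."
        return "No suspicious pattern detected; monitor your symptoms."

    print(chatbot_reply([0.1, -0.2, 0.15, -0.1]))  # below threshold
    print(chatbot_reply([0.6, -0.7, 0.8, -0.5]))   # above threshold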

  • Bär, N. (2022). Controversia por el programa automático de reconocimiento de la tos que implementó la ciudad. El Destape, 16 February 2022. Available on https://www.eldestapeweb.com/nota/controversia-por-el-sistema-automatico-de-reconocimiento-de-tos-que-implemento-la-ciudad-20222160558 [last visited on 15 March 2022]
  • Buenos Aires municipality (2021a). IATos. Herramienta experimental de pre-screening para COVID-19. IA + Tos. Buenos Aires municipality, May 2021. Available on https://www.buenosaires.gob.ar/sites/gcaba/files/casoiatos_completo-2021_1.pdf [last visited on 15 March 2022]
2022-03-17
TENSOR | Early detection of terrorist activities online
Law Enforcement Agencies in Europe
European Union (EU)
Northern Ireland (UK)
Greece
Germany
United Kingdom (UK)
Spain
Belgium
Catalonia (Spain)
France
Italy
TENSOR Consortium
policing and security
automating tasks
compiling personal data
language analysis
recognising images
recognising sounds
evaluating human behaviour
predicting human behaviour
simulating human speech
public administration
private sector
state surveillance
threat to privacy
weakening of democratic practices

Even though the little public information available about TENSOR says that any systems developed would have "built-in privacy and data protection" (TENSOR, 2019; Akhgar, 2017), we haven't been able to find any kind of external audit that could confirm such protection.

2016-XX-XX
N/A

N/A

not known

TENSOR was a project developed by a consortium of the same name between 2016 and 2019, and funded by the European Commission's Horizon2020 programme. (TENSOR, 2019)

The general objective of the project was to develop a digital platform equipped with algorithmic systems for law enforcement agencies across Europe to detect terrorist activities, radicalisation and recruitment online as early as possible. (TENSOR, 2019)

  • Akhgar, B. & al. (2017). TENSOR: retrieval and analysis of heterogeneous online content for terrorist activity recognition. Proceedings of the Estonian Academy of Security Sciences, Number 16. 2017. Available on https://digiriiul.sisekaitse.ee/handle/123456789/2001 [last visited on 14 March 2022]
  • AlgorithmWatch. (2019). Automating Society 2019. AlgorithmWatch, January 2019. Available on https://algorithmwatch.org/en/automating-society-2019/ [last visited on 14 March 2022]
2022-03-14
Algorithm used to enhance security using recognition technology
EU border control agencies
European Union (EU)
United States (US)
worldwide
IDEMIA
policing and security
recognising facial features
private sector
threat to privacy
racial discrimination

Yes. In 2019, the company was audited by the National Institute of Standards and Technology (NIST) and was ranked first of 75 tested systems, with a positive identification rate of 99.5% (National Institute of Standards and Technology, 2019). Despite having the highest accuracy rate, it had a false match rate for Black women that was 10 times higher than that for white women (Simonite, 2019). The company was audited again in 2021 and was found to have maintained its position as having the highest accuracy rate (NIST, 2022).

N/A
N/A

N/A

active use

IDEMIA is a technology company based in France primarily dealing in recognition and identification services. The company’s recognition systems are used in numerous countries, including the US, Australia, EU member states and West African countries, among others. IDEMIA provides several recognition systems which track faces, vehicles, number plates and various objects (Schengen Visa Info News, 2021).

The system has no restrictions on usage, including in private and public spaces, and is notably used as a tool by law enforcement (Burt, 2020). This poses several risks, particularly in the absence of legislation prohibiting the excessive use of recognition systems, and can mean increased mass surveillance and violations of privacy rights.

  • National Institute of Standards and Technology. (2022, March 1). FRVT 1:N Identification. Pages.nist.gov. Available on https://pages.nist.gov/frvt/html/frvt1N.html [last visited on 7 April 2022]
  • Schengen Visa Info News. (2021, April 8). IDEMIA: Biometric Entry-Exit System to Meet EU Border Laws. SchengenVisaInfo.com. Available on https://www.schengenvisainfo.com/news/idemia-biometric-entry-exit-system-to-meet-eu-border-laws/ [last visited on 8 April 2022]
2022-03-11
AMS Algorithm | Profiling job seekers to increase effectiveness of labour market programs
The Public Employment Service Austria
Austria
N/A
labour and employment
profiling and ranking people
public administration
gender discrimination
socioeconomic discrimination

N/A

N/A
N/A

N/A

being tested

The Austrian Public Employment Service (AMS) announced plans to roll out an algorithmic profiling system for job seekers in 2018. The system was introduced with three main goals in mind:

  • Increasing the efficiency and effectiveness of counselling
  • Allhutter, D & al. (2020). Algorithmic Profiling of Job Seekers in Austria: How Austerity Politics Are Made Effective. Frontiers in Big Data. Available on https://doi.org/10.3389/fdata.2020.00005 [last visited on 6 March 2022]
  • Wimmer, B (2018). AMS-Chef: “Mitarbeiter schätzen Jobchancen pessimistischer ein als der Algorithmus.” Futurezone. Available on https://futurezone.at/netzpolitik/ams-chef-mitarbeiter-schaetzen-jobchancen-pessimistischer-ein-als-der-algorithmus/400143839 [last visited on 7 March 2022]
2022-03-09
Algorithm to determine car insurance premiums
Allstate
United States (US)
N/A
business and commerce
profiling and ranking people
making personalised recommendations
private sector
socioeconomic discrimination
gender discrimination
other kinds of discrimination

N/A

N/A
N/A

The California Department of Insurance and Consumer Watchdog filed a complaint with a California administrative law judge, alleging unfair price discrimination and manipulation of the algorithm to drive up prices or drive down discounts. The company denies the allegations and the case is pending.

active use

Allstate is an auto insurance company covering a wide range of policies. The company utilizes a ‘price optimisation’ algorithm which aims at developing personalised pricing for customers. The algorithm works by determining a price based on factors other than risk. The exact functioning of the algorithm was largely unknown until Maryland regulators disapproved a 2013 rate filing submitted for approval, which disclosed significant aspects of the way the algorithm works. The company intended to improve its rating plan by including algorithms to adjust customer rates (Sankin, 2022).

To determine the new rates, customers were divided into micro-segments based on several criteria such as birthdate, gender and zip code. Researchers determined that the algorithm would unfairly target non-white customers as well as customers between 42 and 62 years of age, with increases of between 5 and 20 percent. The company used this model to determine pricing in several states and has faced numerous complaints as a result.

  • Bilal. (2020). How researchers analyzed Allstate’s car insurance algorithm. IPS News. Available on https://ipsnews.net/business/2020/07/06/how-researchers-analyzed-allstates-car-insurance-algorithm/ [last visited on 8 March 2022]
  • Sankin, A. (2022). Newly Public Documents Allege Allstate Overcharged Loyal California Customers $1 Billion. Consumer Watchdog. Available on https://consumerwatchdog.org/news-story/newly-public-documents-allege-allstate-overcharged-loyal-california-customers-1-billion [last visited on 8 March 2022]
2022-03-09
SAFERENT | Screening prospective tenants
SafeRent Solutions
United States (US)
SafeRent Solutions
business and commerce
compiling personal data
predicting human behaviour
profiling and ranking people
automating tasks
private sector
gender discrimination
racial discrimination
other kinds of discrimination

N/A

N/A
N/A

N/A

active use

SafeRent is an algorithmic system developed by US data firm CoreLogic (today marketed under the brand SafeRent Solutions) and in use in the US since at least 2013 (CoreLogic, 2013).

SafeRent is used to produce screening reports of prospective tenants for landlords, and the algorithm works by collecting different kinds of data about people from different sources and estimating a "SafeRent Score" between 200 ("Unlikely Candidate") and 800 ("Best Candidate") by trying to predict how a prospective tenant would behave. (SafeRent, 2022)
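
SafeRent's actual scoring method is not disclosed. As a minimal sketch, assuming a simple linear rescaling of a model's predicted probability onto the published 200-800 range (the mapping below is an assumption for illustration, not SafeRent's method):

    # Toy rescaling of a model's predicted probability that a tenancy goes well
    # onto SafeRent's published 200-800 score range. The linear mapping is an
    # assumption for illustration; the real scoring method is not disclosed.
    MIN_SCORE, MAX_SCORE = 200, 800

    def saferent_style_score(p_good_tenant: float) -> int:
        """Map a probability in [0, 1] linearly onto [200, 800]."""
        p = min(max(p_good_tenant, 0.0), 1.0)
        return round(MIN_SCORE + p * (MAX_SCORE - MIN_SCORE))

    print(saferent_style_score(0.15))  # 290 -> towards "Unlikely Candidate"
    print(saferent_style_score(0.92))  # 752 -> towards "Best Candidate"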

  • CoreLogic (2013). Resident Screening. The Industry Benchmark for Applicant Screening and Risk Reduction. CoreLogic SafeRent, 2013. Available on https://www.corelogic.com/downloadable-docs/analytic-decision.pdf [last visited on 8 March 2022]
  • Leiwant, M.H. (2022). Locked Out: How Algorithmic Tenant Screening Exacerbates the Eviction Crisis in the United States. Georgetown Law Technology Review, February 2022. Available on https://georgetownlawtechreview.org/locked-out-how-algorithmic-tenant-screening-exacerbates-the-eviction-crisis-in-the-united-states/GLTR-02-2022/ [last visited on 8 March 2022]
2022-03-08
Instagram recommendation algorithm
Instagram (owned by Meta)
worldwide
Instagram (owned by Meta)
communication and media
predicting human behaviour
automating tasks
making personalised recommendations
private sector
racial discrimination
other kinds of discrimination

N/A

2016-XX-XX
N/A

N/A

active use

Instagram launched in 2010 as a single stream of photos in chronological order. Facebook bought Instagram in 2012 (today both are owned by Meta). By 2016, Instagram had introduced an algorithmic system that tried to rank and show posts in a personalised way to users, based on what they would find most interesting. (Mosseri, 2021)

These days, different parts of the Instagram app (Feed, Explore, Reels...) use different algorithms to offer different kinds of personalised content to their users. (Mosseri, 2021)

  • Amin, F. (2021). The growing criticism over Instagram's algorithm bias. Toronto City News, 5 April 2021. Available on https://toronto.citynews.ca/2021/04/05/the-growing-criticism-over-instagrams-algorithm-bias/ [last visited on 28 February 2022]
  • BBC (2020). Facebook and Instagram to examine racist algorithms. BBC News, 22 July 2020. Available on https://www.bbc.com/news/technology-53498685 [last visited on 28 February 2022]
2022-02-28
PUBLIC SAFETY ASSESSMENT | Pretrial Release risk assessment tool
N/A
United States (US)
The Laura and John Arnold Foundation
justice and democratic processes
predicting human behaviour
profiling and ranking people
public administration
other kinds of discrimination

N/A

2018-XX-XX
N/A

N/A

active use

Public Safety Assessment (PSA) is a pretrial risk assessment tool developed by the Laura and John Arnold Foundation, designed to assist judges in deciding whether to detain or release a defendant before trial. The algorithm is used in several jurisdictions and includes three different risk assessment algorithms (Milgram et al., 2015). The algorithms determine a risk score based on nine factors:

  •  Age
  • Advancing Pretrial Policy and Research (n.d.) How It Works. (APPR). Available on https://advancingpretrial.org/psa/factors/ [last visited on 26 February 2022]
  • Milgram, A., Holsinger, A. M., Vannostrand, M., & Alsdorf, M. W. (2015). Pretrial Risk Assessment. Federal Sentencing Reporter. Available on https://doi.org/10.1525/fsr.2015.27.4.216 [last visited on 26 February 2022]
2022-02-28
Algorithm to determine pre-trial risk for immigrant arrestees
U.S. Immigration and Customs Enforcement
United States (US)
N/A
policing and security
profiling and ranking people
automating tasks
public administration
other kinds of discrimination

N/A

2013-XX-XX
N/A

The New York Civil Liberties Union (NYCLU) and Bronx Defenders filed a class action suit against ICE challenging the algorithm’s fairness, in light of the rise in the algorithm automatically determining detention regardless of mitigating factors that would warrant release. The NYCLU confirmed, through data received via a Freedom of Information Act lawsuit, that the algorithm had been manipulated. The lawsuit also alleges that detainees have had their rights to due process violated because they were not informed that the algorithm was being used to determine their release and were not given an opportunity to challenge it. The lawsuit is intended to challenge the usage of the algorithm and restore a hearing process for detainees to determine their release. The suit is pending (Bloch-Wehba, 2020).

active use

As part of its mandate, the U.S. Immigration and Customs Enforcement (ICE) determines whether a person arrested for an immigration offence will be detained or released on bond or simply released pending trial. ICE deploys an algorithm to determine which option to select for arrestees and the algorithm has been in use since 2013. The algorithm works by reviewing the detainee’s history including factors like their likelihood to be a threat to public safety and their likelihood to be a flight risk. The algorithm then recommends one out of four options: detention without bond, detention with the possibility of release on bond, outright release or to defer the decision to a human ICE supervisor. (Biddle, 2020)

In 2015 and 2017, changes were observed which indicated that the algorithm could no longer recommend that an arrestee be released. Whereas between 2013 and 2017 the algorithm recommended that around 47 percent of arrestees be released, after June 2017 this number dropped sharply.

  • Biddle, S. (2020). ICE’s New York Office Uses a Rigged Algorithm to Keep Virtually All Arrestees in Detention, the ACLU Says It’s Unconstitutional. The Intercept. Available on https://theintercept.com/2020/03/02/ice-algorithm-bias-detention-aclu-lawsuit/ [last visited on 20 February 2022]
  • Bloch-Wehba, H. (2020). Perspective | A lawsuit against ICE reveals the danger of government-by-algorithm. Washington Post. Available on https://www.washingtonpost.com/outlook/2020/03/05/lawsuit-against-ice-reveals-danger-government-by-algorithm/ [last visited on 23 February 2022]
2022-02-27
HIREVUE | Profiling employment candidates to select the best hires
N/A
United States (US)
Hirevue
labour and employment
evaluating human behaviour
recognising facial features
private sector
other kinds of discrimination

Yes. Hirevue conducted two audits on its algorithm and made the results public. In December 2020, a report of Hirevue's algorithm was released by O'Neil Risk Consulting and Algorithmic Auditing (ORCAA). The algorithm was audited for fairness and bias concerns specifically assessing four areas of concern:

  • Potential differences in competency scores across different demographics
2004-XX-XX
N/A

In 2019, the Electronic Privacy Information Center filed a complaint against Hirevue with the Federal Trade Commission (FTC) alleging unfair practices. Hirevue commissioned an independent audit of its own, with results that were largely positive about the company while flagging bias-related problems, such as the assessment of candidates with different accents. In response to concerns raised about the algorithm, the company announced that the visual aspect of the product had been discontinued in March 2020, stating that it found the visual analysis did not add value to the assessments.

active use

Hirevue is a face-scanning algorithm used to assess jobseekers on behalf of prospective employers. The algorithm scans candidates' faces and analyzes their facial movements, their words (word choice and word complexity are assessed) and their manner of speaking. It is trained on data collected from previous successful employees; it analyzes the candidate's data for suitability and then compares it to the two highest-performing employees in the company. Lastly, the algorithm produces a score out of 100 for the suitability of the candidate based on these assessments.

The employer can then choose to advance the applications of candidates who score above a determined threshold, or automatically reject those who score below it. The algorithm has been relied on by more than 250 companies and has been the subject of scrutiny, as its shortcomings are detrimental to the livelihoods of applicants and risk reinforcing existing biases in hiring.
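
Hirevue's model is proprietary; a heavily simplified sketch of a composite score out of 100 combined with an employer threshold might look like the following (the sub-score names, weights and cut-off are invented for illustration):

    # Toy interview scoring: combine sub-scores (0-1) for facial movement, word
    # choice and manner of speaking into a 0-100 score, then apply an employer
    # threshold. Weights, names and the cut-off are invented for illustration.
    WEIGHTS = {"facial_movement": 0.3, "word_choice": 0.4, "speaking_manner": 0.3}

    def candidate_score(sub_scores: dict) -> float:
        return 100 * sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)

    def decision(sub_scores: dict, threshold: float = 70.0) -> str:
        return "advance" if candidate_score(sub_scores) >= threshold else "reject"

    print(decision({"facial_movement": 0.8, "word_choice": 0.9, "speaking_manner": 0.7}))  # advance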

  • Engler, A. (2019). For some employment algorithms, disability discrimination by default. Brookings. Available on https://www.brookings.edu/blog/techtank/2019/10/31/for-some-employment-algorithms-disability-discrimination-by-default/ [last visited on 24 February 2022]
  • Harwell, D. (2019). A face-scanning algorithm increasingly decides whether you deserve the job. The Washington Post. Available on  https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/ [last visited on 24 February 2022]
2022-02-27
FRANK | Assessing drivers who can provide food delivery services
Deliveroo
worldwide
N/A
labour and employment
profiling and ranking people
automating tasks
private sector
socioeconomic discrimination
other kinds of discrimination

N/A

2013-XX-XX
2021-XX-XX

In a lawsuit filed by a group of Deliveroo riders, an Italian court found that Deliveroo had discriminated against its workers by failing to include, as part of its ranking algorithm, legally protected reasons for withholding labour. The company was fined 50,000 euros, and a company representative stated that the algorithm had been discontinued. (Geiger, 2021)

no longer in use

The food delivery platform Deliveroo used an algorithm to evaluate riders on its platform. The algorithm ranked each rider based on several measures of their ability to deliver food, including data points on cancellations or failures to begin a pre-booked shift. As a result of the ranking, the app would offer riders with a higher ranking more shifts in busier time blocks, which directly impacted their earning capacity.

The algorithm, however, was not designed to recognize mitigating circumstances that might lead to a cancellation, for instance medical and health reasons and other categories of legally protected reasons for withholding labour.

  • Geiger, G. (2021). Court Rules Deliveroo Used “Discriminatory” Algorithm. Vice, 5 January 2021. Available on https://www.vice.com/en/article/7k9e4e/court-rules-deliveroo-used-discriminatory-algorithm [last visited on 23 February 2022]
  • Keane, J. (2021). Deliveroo Rating Algorithm Was Unfair To Riders, Italian Court Rules. Forbes, 5 January 2021. Available on https://www.forbes.com/sites/jonathankeane/2021/01/05/italian-court-finds-deliveroo-rating-algorithm-was-unfair-to-riders/?sh=46bfa0bd22a1 [last visited on 23 February 2022]
2022-02-27
FACEBOOK AD | Deciding which ads to show to whom, when, how and for how long
Facebook (owned by Meta)
worldwide
Facebook (owned by Meta)
communication and media
business and commerce
automating tasks
ranking bids and other content submissions
making personalised recommendations
private sector
gender discrimination
racial discrimination

There have been attempts to conduct external audits of the algorithm, which for outsiders is in effect a black box.

A paper published in 2021 described the results of an audit for "discrimination in algorithm delivering job ads". The authors said their audit confirmed "skew by gender in ad delivery on Facebook". (Imana & al., 2021)

2004-XX-XX
N/A

N/A

active use

Facebook (whose parent company was renamed Meta in 2021) launched in February 2004, and the website has used advertising to generate income ever since (Kirkpatrick, 2010). Over the years, Facebook's advertising business has grown much more sophisticated and has become the company's main source of income, and these days the company uses an algorithmic system to run it. To outsiders, the algorithms are a black box: their internal workings are not known.

Generally speaking, there are two steps in how the Facebook Ad algorithm works. First, advertisers decide which "segments of the Facebook population to target" (Biddle, 2019), like "American men who like pizza and beer" or "Spanish young women interested in sports and organic food".
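
As a minimal sketch of that first, advertiser-facing targeting step (the user records and the criteria below are invented; the second, delivery step, which the audit cited above examined, is not shown):

    # Toy audience targeting: select the users who match the segment criteria an
    # advertiser has chosen. All user records and criteria are invented.
    USERS = [
        {"id": 1, "country": "US", "gender": "m", "interests": {"pizza", "beer"}},
        {"id": 2, "country": "US", "gender": "f", "interests": {"sports"}},
        {"id": 3, "country": "ES", "gender": "f", "interests": {"sports", "organic food"}},
    ]

    def target_audience(users, country=None, gender=None, interests=frozenset()):
        """Return users matching every criterion the advertiser specified."""
        return [u for u in users
                if (country is None or u["country"] == country)
                and (gender is None or u["gender"] == gender)
                and set(interests) <= u["interests"]]

    print(target_audience(USERS, country="ES", gender="f", interests={"sports"}))  # user 3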

  • Ali, M. & al. (2019). Discrimination through Optimization: How Facebook's Ad Delivery Can Lead to Biased Outcomes. Proceedings of the ACM on Human-Computer Interaction. Volume 3, issue CSCW, November 2019, article N. 199, pp 1–30. Available on https://doi.org/10.1145/3359301 [last visited on 24 February 2022]
  • Angwin, J. & al. (2017). Facebook (Still) Letting Housing Advertisers Exclude Users by Race. ProPublica, 21 November 2017. Available on https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin [last visited on 24 February 2022]
2022-02-24
BART | Curating and recommending music and podcasts
Spotify
worldwide
Spotify
communication and media
language analysis
compiling personal data
evaluating human behaviour
predicting human behaviour
making personalised recommendations
private sector
other kinds of discrimination
disseminating misinformation
manipulation / behavioural change

N/A

2008-XX-XX
N/A

N/A

active use

Spotify launched in 2008 as a music online streaming service. Like other such services, Spotify developed and started using a recommendation algorithm, today known as BaRT (Bandits for Recommendations as Treatments). The aim of the algorithm is to learn what users like and what they may like, and offer different kinds of personalised and curated recommendations, so that the user can keep listening to music they like and stay on the Spotify platform. (Marius, 2021)
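
BaRT's internals are proprietary, but its name points to the multi-armed bandit family of algorithms; a minimal epsilon-greedy sketch illustrates the explore/exploit idea behind such recommenders (the content categories, rewards and epsilon value below are invented):

    import random

    # Minimal epsilon-greedy bandit: mostly recommend the content category with the
    # best observed engagement ("exploit"), sometimes try another one ("explore").
    # Categories, rewards and epsilon are invented; BaRT itself is more elaborate.
    class EpsilonGreedyRecommender:
        def __init__(self, categories, epsilon=0.1):
            self.epsilon = epsilon
            self.plays = {c: 0 for c in categories}
            self.total_reward = {c: 0.0 for c in categories}

        def choose(self) -> str:
            played = [c for c in self.plays if self.plays[c] > 0]
            if not played or random.random() < self.epsilon:
                return random.choice(list(self.plays))   # explore: try any category
            # exploit: pick the category with the best average observed reward
            return max(played, key=lambda c: self.total_reward[c] / self.plays[c])

        def update(self, category: str, reward: float) -> None:
            self.plays[category] += 1
            self.total_reward[category] += reward        # e.g. 1.0 if the user listened

    rec = EpsilonGreedyRecommender(["recently_played", "new_releases", "podcasts"])
    rec.update("recently_played", 1.0)                   # user engaged with that shelf
    print(rec.choose())                                  # usually "recently_played" now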

The work of the algorithm can be clearly seen on the Spotify home screen, which is full of different categories of content being recommended to the user, from the songs they play the most, to the ones they were recently listening to, to new ones that the algorithm thinks the user will like. In 2015 Spotify added podcasts to its streaming services (Bizzaco & al., 2021), and since then podcasts have also been curated and recommended by Spotify's algorithm.

  • Andrews, T.M. & De Vynck, G. (2022). Why artists are leaving Spotify. The Washington Post, 8 February 2022. Available on https://www.washingtonpost.com/arts-entertainment/2022/02/03/why-artists-leaving-spotify/ [last visited on 23 February 2022]
  • Bizzaco, M. & al. (2021). Apple Music vs. Spotify. Digital Trends, 19 May 2021. Available on https://www.digitaltrends.com/music/apple-music-vs-spotify/ [last visited on 23 February 2022]
2022-02-23
Algorithm to predict the likelihood of antisocial behaviour
Bristol City Council
Bristol (United Kingdom)
N/A
policing and security
predicting human behaviour
profiling and ranking people
public administration
threat to privacy
socioeconomic discrimination
state surveillance

N/A

2019-XX-XX
N/A

N/A

active use

In 2019, Bristol’s City Council in the UK began to make use of an algorithm which would compile data from local authorities, including the police and the NHS, to predict the likelihood of antisocial behavior, such as drug use and domestic violence in around 50,000 families across Bristol. The algorithm was developed in-house by a data analytics hub, Insight Bristol (Booth, 2019).

The algorithm relies on data from around 30 different public sector resources, including Social care systems, the Department for Work and Pensions and the Avon and Somerset Constabulary. (Booth, 2019)

  • Bristol City Council (2019). Insight Bristol and the Think Family Database. Bristol.gov.uk. Available on https://www.bristol.gov.uk/policies-plans-strategies/the-troubled-families-scheme [last visited on 21 February 2022]
  • Bristol City Council (2019). Early Intervention and Targeted Support Privacy Notice. Bristol.gov.uk. Available on https://www.bristol.gov.uk/documents/20182/2609028/Think+Family+Privacy+Notice+v1.02.pdf/cab81fb5-976e-835d-ab9e-c7ce2c0864ce [last visited on 21 February 2022]
2022-02-10
Algorithm to determine eligibility for medical benefits
Arkansas State
United States (US)
N/A
social services
automating tasks
public administration
socioeconomic discrimination

N/A

2016-XX-XX
N/A

N/A

active use

Prior to 2016, the amount of Medicaid benefits allocated to persons in need in the State of Arkansas was assessed by people. Assessors would visit the person and interview them, determining how caretakers would be assigned. In 2016, the State of Arkansas swapped human assessors for an algorithmic tool which determines how many hours of help persons in need are allocated. The algorithm receives data from an assessor who meets with the beneficiary; the information consists of data pooled from 200 questions asked of the beneficiary. The algorithm then places the beneficiary in one of three tiers of care: the first tier is the lowest form of support, often services in an outpatient clinic; the second tier provides more assistance and a wider range of services; and the third tier provides additional support, which can mean inpatient treatment or 24-hour paid support (Arkansas Advocates for Children and Families, 2018).
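
As a minimal, hypothetical sketch of a questionnaire-driven tier assignment of this kind (the answer scale, weights and cut-offs below are invented, not the actual Arkansas assessment):

    # Toy tier assignment: convert questionnaire answers into a needs score and map
    # it onto three care tiers. The scale and cut-offs are invented for illustration.
    def needs_score(answers: dict) -> int:
        """answers maps question ids to hypothetical 0-3 severity ratings."""
        return sum(answers.values())

    def care_tier(score: int) -> int:
        if score < 20:
            return 1   # lowest support, e.g. outpatient services
        if score < 40:
            return 2   # broader range of services
        return 3       # most intensive support, e.g. 24-hour paid care

    example_answers = {f"q{i}": 1 for i in range(30)}   # 30 questions answered with severity 1
    print(care_tier(needs_score(example_answers)))      # tier 2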

These algorithms have been criticized as dehumanizing, reducing individual needs and circumstances to a score.

  • Brown, L. & al. (2020). Report: Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities. Center for Democracy and Technology. October 2020. Available on https://cdt.org/insights/report-challenging-the-use-of-algorithm-driven-decision-making-in-benefits-determinations-affecting-people-with-disabilities [Last visited on 3 February 2022]
  • Arkansas Advocates for Children and Families 2018. Transforming Medicaid in Arkansas: an early look at the PASSE program. Arkansas Advocates for Children and Families. Available on http://www.aradvocates.org/wp-content/uploads/AACF-PASSE.webfinal.12.10.18.pdf [last visited on 29 February 2022]
2022-02-10
DANTE | Detecting and analysing terrorist-related online content and financing ­activities
N/A
European Union (EU)
DANTE Consortium
policing and security
automating tasks
evaluating human behaviour
language analysis
predicting human behaviour
compiling personal data
public administration
racial discrimination
other kinds of discrimination
state surveillance
threat to privacy
weakening of democratic practices

We haven't found any information about the DANTE algorithms having been externally and independently audited.

N/A
N/A

N/A

not known

DANTE was a project funded by the European Commission as part of its Horizon2020 Programme, and which seems to have been active between 2016 and 2019. DANTE was developed by a consortium formed by private organisations and public bodies from EU members Italy, Spain, Belgium, Portugal, Germany, France, Ireland, Greece and Austria, as well as from the UK, which during that period was still an EU member. (DANTE Consortium, 2019).

DANTE's aim was "to deliver more effective, efficient, automated data mining and analytics solutions and an integrated system to detect, retrieve, collect and analyse huge amount of heterogeneous and complex multimedia and multi-language terrorist-related contents, from both the Surface and the Deep Web, including Dark nets". (DANTE Consortium, 2019)

  • DANTE Consortium (2019). DANTE - Detecting and analysing terrorist-related online contents and financing activities. DANTE Project website. Available on https://www.h2020-dante.eu [last visited on 8 February 2022]
  • Penner, K. (2020). "European Union". Automating Society 2019 Report. AlgorithmWatch. Available on https://algorithmwatch.org/en/automating-society-2019/european-union [last visited on 8 February 2022]
2022-02-08
RADAR-rechts | Assessing the risk of violence by far-right extremists
German Federal Criminal Police
Germany
German Federal Criminal Police
policing and security
predicting human behaviour
profiling and ranking people
public administration
other kinds of discrimination

N/A

2021-XX-XX
N/A

N/A

being tested

After two violent attacks by right-wing extremists in 2019 in Germany, the German Ministry of Interior announced a series of measures, including the development of an algorithmic risk assessment tool to try to identify potentially violent right-wing extremists. (Flade, 2021)

At the beginning of 2020, the German Federal Criminal Police (BKA) started developing the algorithm, called RADAR-rechts (RADAR-right). First, a review of the existing literature was done and experts were consulted. RADAR-rechts was also at least partially based on a similar algorithm that the German police have been using since 2017 to assess the risk of violent behaviour by Islamist extremists. (Flade, 2021)

  • Flade, F. (2021). Mit RADAR gegen Rechtsterroristen? Tagesschau, 17 November 2021. Available on https://www.tagesschau.de/investigativ/wdr/radar-bka-rechtsextremismus-101.html [last visited on 31 January 2022]
2022-01-31
Algorithm to assign a welfare fraud risk score
Rotterdam Municipality
Rotterdam (Netherlands)
Netherlands
N/A
social services
compiling personal data
predicting human behaviour
profiling and ranking people
public administration
racial discrimination
socioeconomic discrimination
other kinds of discrimination

Investigative collective Lighthouse Reports and the Dutch public broadcaster, the VPRO, conducted a partial audit of the algorithm in 2021. The Rotterdam municipality "voluntarily disclosed some of the code and technical information" but not other required details, like training data. (Lighthouse Reports, 2021)

Lighthouse Reports "data scientists have so far conducted a partial audit based on the disclosed material evaluating data quality, reliability, transparency, and accuracy". (Lighthouse Reports, 2021).

N/A
N/A

N/A

active use

Since at least 2018, the Rotterdam Municipality has been using an algorithm to estimate the risk of welfare recipients committing fraud. It's not known who developed the algorithm. (Lighthouse Reports, 2021)

The algorithm is fed personal details about every welfare recipient, "from address to mental health history to hobbies", which result in each person being assigned a particular fraud risk score. Based on that score, the Rotterdam Municipality has since 2018 put "thousands of welfare recipients" under investigation, and "hundreds have had their benefits terminated". (Lighthouse Reports, 2021)

  • Argos (2021). In het vizier van het algoritme. VPRO Argos, 18 December 2021. Available on https://www.vpro.nl/argos/media/luister/argos-radio/onderwerpen/2021/In-het-vizier-van-het-algoritme-.html [last visited on 28 January 2022]
  • Lighthouse Reports (2021). Unlocking a welfare fraud prediction algorithm. Lighthouse Reports, 18 December 2021. Available on https://www.lighthousereports.nl/investigation/unlocking-a-welfare-fraud-prediction-algorithm [last visited on 28 January 2022]
2022-01-28
Algorithm to estimate a person's age by checking their face
Co-op (UK supermarket)
Tesco (UK supermarket)
Asda (UK supermarket)
Aldi (UK supermarket)
Morrisons (UK supermarket)
United Kingdom (UK)
Yoti
business and commerce
recognising facial features
profiling and ranking people
private sector
threat to privacy
racial discrimination

Yoti says that its "age estimation technology has been certified by an independent auditor for use in a Challenge 25 policy area and has been found to be at least 98.86 percent reliable". (Yoti, 2022).

However, there are no other public details of that.

2022-XX-XX
N/A

N/A

being tested

In March 2021, the Home Office and the Office for Product Safety and Standards of the British government launched a pilot project for "retailers, bars and restaurants" to trial technology to carry out age verification checks. (UK Home Office & Baroness Williams of Trafford, 2021).

Then, after some delay, several British supermarkets announced that they would start trialling age verification software developed by technology company Yoti, in a phase lasting from January to May 2022.

  • UK Home Office & Baroness Williams of Trafford (2021). Age verification technology to be trialled in shops, bars and restaurants. Gov.uk, 18 March 2021.Available on https://www.gov.uk/government/news/new-age-verification-technology-to-be-trialled-in-shops [last visited on 28 January 2022]
  • Williams, R. (2022). Co-op, Tesco, Asda, Aldi and Morrisons supermarkets to trial facial age estimation tech for buying alcohol. inews.co.uk, 17 January 2022. Available on https://inews.co.uk/news/technology/co-op-tesco-asda-aldi-morrisons-supermarkets-trial-facial-age-estimation-tech-buying-alcohol-1406501 [last visited on 28 January 2022]
2022-01-28
SEND@ | Profiling people to offer relevant employment search results
Spanish Public Employment Service
Spain
Spanish Public Employment Service
labour and employment
profiling and ranking people
evaluating human behaviour
making personalised recommendations
public administration
gender discrimination
religious discrimination
other kinds of discrimination
socioeconomic discrimination
racial discrimination

It hasn't been independently audited, but the OECD has shown interest in Send@ and is running an evaluation of its performance in collaboration with the Public Employment Service in Spain (SEPE), which plans to hand in the final report in June 2022. (Expansion, 2021)

2021-XX-XX
N/A

N/A

active use

The Public Employment Service in Spain (SEPE) reportedly developed Send@ (Luengo, 2021), and has been using it since 2021 with the aim of helping job-seekers by generating customised job advice according to the job-seeker's data.

Public servants at SEPE using Send@ enter the new job-seeker's details into the system; the Send@ algorithm then looks among former job-seekers for profiles that are similar to that of the new job-seeker, selects those who went on to have the best careers, and checks which actions those people took. (Expansion, 2021)
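
SEPE has not published Send@'s internals; a minimal nearest-neighbour sketch of the idea described above could look like this (the profile fields, example records and distance measure are all invented):

    # Toy "similar past job-seekers" lookup: find former job-seekers whose profile
    # is closest to the new one, keep the one with the best outcome, and surface
    # the actions they took. Fields, records and the distance measure are invented.
    FORMER_JOBSEEKERS = [
        {"age": 34, "years_experience": 8, "outcome_score": 0.9, "actions": ["certification course"]},
        {"age": 52, "years_experience": 20, "outcome_score": 0.4, "actions": ["CV workshop"]},
        {"age": 29, "years_experience": 5, "outcome_score": 0.8, "actions": ["language course"]},
    ]

    def distance(a: dict, b: dict) -> float:
        return abs(a["age"] - b["age"]) + abs(a["years_experience"] - b["years_experience"])

    def recommend_actions(new_profile: dict, k: int = 2) -> list[str]:
        nearest = sorted(FORMER_JOBSEEKERS, key=lambda p: distance(p, new_profile))[:k]
        best = max(nearest, key=lambda p: p["outcome_score"])
        return best["actions"]

    print(recommend_actions({"age": 31, "years_experience": 6}))  # ['certification course']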

  • Expansion (2021). Un algoritmo del SEPE para mejorar la búsqueda de empleo genera interés entre las empresas. Expansion, 18 July 2021. https://www.expansion.com/economia/2021/07/18/60f4110be5fdea83168b45ab.html [last visited on 19 January 2022]
  • JOBinplanet (2021). Send@: Digitalización y uso masivo de datos para ayudar a encontrar trabajo - SEPE x JOBMadrid'20. JOBinplanet, 12 January 2021. Available as a video on YouTube, https://www.youtube.com/watch?v=TNKVWL0pFRU [last visited on 19 January 2022]
2022-01-20
House price-estimating algorithm (iBuying)
Zillow
United States (US)
Zillow
business and commerce
predicting human behaviour
automating tasks
private sector
socioeconomic discrimination

N/A

2006-XX-XX
2021-11-02

N/A

no longer in use

Zillow is an online real estate marketplace that allows users to virtually view homes for sale across the globe. An arm of the company, Zillow Offers, developed an algorithm to estimate the cost of homes it intended to buy, with a view to generating savings for Zillow. The algorithm failed in this aim and instead overestimated the value of the homes Zillow purchased, leading to a loss of more than $500 million (Stokel, 2021).

The failure of such an algorithm affects people's ability to purchase property, as it makes housing availability subject to fluctuations in poorly understood algorithmic models (Parker & Putzier, 2021). Despite Zillow reformulating its model, similar models are deployed across the world with potentially damaging consequences.

  • Clark, P. (2021) Zillow’s Algorithm-Fueled Buying Spree Doomed Its Home-Flipping Experiment. Bloomberg Businessweek, 8 November 2021. https://www.bloomberg.com/news/articles/2021-11-08/zillow-z-home-flipping-experiment-doomed-by-tech-algorithms [last visited on 20 January 2022]
  • Cook, J. (2021) Why the iBuying algorithms failed Zillow, and what it says about the business world’s love affair with AI. GeekWire, 3 November 2021. https://www.geekwire.com/2021/ibuying-algorithms-failed-zillow-says-business-worlds-love-affair-ai/ [last visited on 20 January 2022]
2022-01-20
Algorithm to detect possible fraud in social services
British Department for Work and Pensions
United Kingdom (UK)
N/A
social services
labour and employment
predicting human behaviour
public administration
other kinds of discrimination

No, and the British Department for Work and Pensions (DWP) has "rebuffed attempts to explain how the algorithm behind the system was compiled". (Savage, 2021)

N/A
N/A

In November 2021, The Guardian reported that the Greater Manchester Coalition of Disabled People (GMCDP), with the help of campaign group Foxglove, sent a letter to the DWP "demanding details of the automated process that triggers the investigations" into possible fraud, and which seems to disproportionately target disabled people (Savage, 2021).

active use

The British government's Department for Work and Pensions (DWP) uses an algorithmic system to try to flag potentially fraudulent welfare applications. It's not clear when exactly the algorithm was introduced by the DWP, nor how exactly it works. (Savage, 2021)

In November 2021, it was reported that the DWP's algorithm seemed to disproportionately target disabled people as possible fraudsters. After being flagged by the system, people were "subjected to stressful checks" and could face "an invasive and humiliating investigation lasting up to a year" (Savage, 2021).

  • Bloom, D. (2022). DWP faces legal action to reveal 'algorithms' that flag claims of benefit fraud. Daily Mirror, 10 February 2022. Available on https://www.mirror.co.uk/news/politics/dwp-faces-legal-action-reveal-26199571 [last visited on 28 February 2022]
  • Foxglove (2021). NEW CASE: secret algorithm targets disabled people unfairly for benefit probes – cutting off life-saving cash and trapping them in call centre hell. Foxglove News, 1 December 2021. https://www.foxglove.org.uk/2021/12/01/secret-dwp-algorithm/ [last visited on 20 January 2022]
2022-01-20
PROMETEA | Optimising bureaucratic processes
Buenos Aires Public Prosecutor Office
Buenos Aires (Argentina)
Argentina
Buenos Aires Public Prosecutor Office
justice and democratic processes
automating tasks
public administration
weakening of democratic practices

We haven't found information about the system being externally audited, but its authors say that Prometea is traceable and explainable: "Prometea works through traceable, auditable and reversible machine learning. This means that it is not a 'black box', and that it is perfectly possible to establish what is the underlying reasoning that makes the prediction. (...) As a rule, all the methodology used to design Prometea is accessible, traceable and understandable, in a clear and familiar language to describe how results are reached." (Corvalán & al, 2020)

2017-10-XX
N/A

N/A

active use

Prometea was developed by the Public Prosecutor’s Office of the City of Buenos Aires as a "supervised learning" system to help civil servants by automating tasks as an "optimizer of bureaucratic processes". It functions as an interactive expert system that, like a voice assistant, requests inputs from the civil servant and generates outputs that would have taken much longer to produce without the software. "For example, from 5 questions, you are able to complete a legal opinion by which you must reject an appeal by extemporaneous". (Corvalán & al, 2020)

Prometea works both by completely automating processes ("the algorithms connect data and information with documents automatically. The document is generated without human intervention") and by automating processes with reduced human intervention ("in many cases, it is necessary that the persons interact with an automated system, in order to complete or add value to the creation of a document"). (Corvalán & al, 2020)
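
As a minimal sketch of the question-driven document generation described above (the questions and the template are invented placeholders, not Prometea's actual content):

    # Toy question-driven document generator: answers to a fixed set of questions
    # are slotted into a template, as an interactive expert system might do.
    # Questions and template text are invented placeholders.
    QUESTIONS = ["case_number", "appellant", "filing_date", "deadline", "decision"]

    TEMPLATE = (
        "Case {case_number}: the appeal filed by {appellant} on {filing_date} "
        "(deadline {deadline}) is hereby {decision}."
    )

    def generate_document(answers: dict) -> str:
        missing = [q for q in QUESTIONS if q not in answers]
        if missing:
            raise ValueError(f"Missing answers for: {missing}")
        return TEMPLATE.format(**answers)

    print(generate_document({
        "case_number": "123/2020", "appellant": "J. Doe", "filing_date": "2020-03-01",
        "deadline": "2020-02-15", "decision": "rejected as out of time",
    }))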

  • Abogados.com.ar (2019). PROMETEA: el primer sistema de inteligencia artificial predictivo de la justicia se presenta en el “Mundial de Inteligencia Artificial”. Abogados.com.ar. 27 May 2019. Available on https://abogados.com.ar/prometea-el-primer-sistema-de-inteligencia-artificial-predictivo-de-la-justicia-se-presenta-en-el-mundial-de-inteligencia-artificial/23523 [last visited on 15 December 2021]
  • Berchi, M. (2020). La inteligencia artificial se asoma a la justicia pero despierta dudas éticas. El País. 4 March 2020. https://elpais.com/retina/2020/03/03/innovacion/1583236735_793682.html [last visited on 15 December 2021]
2021-12-15
Algorithms to screen high school student applications
New York City high schools
New York City (US)
United States (US)
N/A
education and training
profiling and ranking people
public administration
private sector
racial discrimination
socioeconomic discrimination

We haven't found information on independent audits of such algorithms, but a data compilation and investigation by The Markup and The City media outlets found that screening algorithms at New York City high schools discriminate disproportionately against Black and Latino applicants. (Lecher & Varner, 2021)

N/A
N/A

N/A

active use

In New York City, some high schools use screening processes, in many cases involving automated algorithms, to pick their students from among all the applicants they receive. Different schools use different mechanisms, and these processes are generally opaque: it's not clear why exactly some students are accepted at a given school while others are not.

In May 2021, The Markup and The City media outlets published the results of a data investigation that showed that these screening processes discriminate disproportionately against students of colour, and especially those of Black and Latino origin.

  • Lecher, C. & Varner, M. (2021). NYC’s School Algorithms Cement Segregation. This Data Shows How. The Markup. 26 May 2021. Available on https://themarkup.org/news/2021/05/26/nycs-school-algorithms-cement-segregation-this-data-shows-how [last visited on 15 December 2021]
  • NYC.gov. (2020). Mayor de Blasio and Chancellor Carranza Announce 2021-22 School Year Admissions Process. NYC, the Official Website of the City of New York. 18 December 2020. Available on https://www1.nyc.gov/office-of-the-mayor/news/874-20/mayor-de-blasio-chancellor-carranza-2021-22-school-year-admissions-process [last visited on 15 December 2021]
2021-12-15
Risk classification model | Assessing likelihood of fraud in social services
Dutch tax authorities
Netherlands
Dutch tax authorities
social services
predicting human behaviour
profiling and ranking people
public administration
racial discrimination
other kinds of discrimination
socioeconomic discrimination
weakening of democratic practices

No, but Amnesty International published a report on the functioning of the algorithm (Amnesty, 2021)

2013-XX-XX
N/A

N/A

not known

This is an algorithmic decision-making system that created risk profiles of childcare benefits applicants "who were supposedly more likely to submit inaccurate applications and renewals and potentially commit fraud. Parents and caregivers who were selected by this system had their benefits suspended and were subjected to investigation". (Amnesty, 2021)

"The tax authorities used information on whether an applicant had Dutch nationality as a risk factor in the algorithmic system. “Dutch citizenship: yes/no” was used as a parameter in the risk classification model for assessing the risk of inaccurate applications. Consequently, people of non-Dutch nationalities received higher risk scores. The use of the risk classification model amounted to racial profiling. The use of nationality in the risk classification model reveals the assumptions held by the designer, developer and/or user of the system that people of certain nationalities would be more likely to commit fraud or crime than peopl

  • Amnesty International (2021). Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal. Amnesty International, 25 October 2021. Available on https://www.amnesty.org/en/documents/eur35/4686/2021/en/ [last visited on 6 December 2021]
2021-12-06
VI-SPDAT | Identifying the most vulnerable people to be served first
Municipal governments in the US
United States (US)
OrgCode
Common Ground
social services
profiling and ranking people
public administration
socioeconomic discrimination
gender discrimination
racial discrimination
religious discrimination
other kinds of discrimination
weakening of democratic practices

Yes, an academic paper published in 2018 examined the algorithm's reliability and validity: "Results suggest there are challenges to the reliability and validity of the VI-SPDAT in practical use. VI-SPDAT total scores did not significantly predict risk of return to homeless services, while type of housing was a significant predictor. Vulnerability assessment instruments have important implications for communities working to end homelessness by facilitating prioritization of scarce housing resources. Findings suggest that further testing and development of the VI-SPDAT is necessary." (Brown, 2018)

2013-XX-XX
N/A

N/A

active use

VI-SPDAT (Vulnerability Index — Service Prioritization Decision Assistance Tool) is an algorithmic system designed to assist civil servants or community workers in the process of providing housing services to homeless people. The algorithm is meant to assess each person's different situation to assign a "vulnerability index" that may help in allocating assistance to the people deemed most in need. (Thompson, 2021; OrgCode, 2021)

After its launch by OrgCode and Common Ground in 2013, it was considered so helpful and successful that at least 40 US states started using it. (Thompson, 2021)

  • Brown, M. & al. (2018). Reliability and validity of the Vulnerability Index-Service Prioritization Decision Assistance Tool (VI-SPDAT) in real-world implementation. Journal of Social Distress and Homelessness. Volume 27, 2018 - Issue 2. Available on https://doi.org/10.1080/10530789.2018.1482991 [last visited on 6 December 2021]
  • OrgCode (2021). A message from OrgCode on the VI-SPDAT Moving Forward. OrgCode Blog, 25 January 2021. Available on https://www.orgcode.com/blog/a-message-from-orgcode-on-the-vi-spdat-moving-forward [last visited on 6 December 2021]
2021-12-06
VALENCIA IA4COVID | Generating policy recommendations
Valencian regional government
Spain
Valencian region (Spain)
Valencia IA4COVID
justice and democratic processes
compiling personal data
predicting human behaviour
public administration
weakening of democratic practices

N/A

2020-XX-XX
N/A

N/A

active use

This is an algorithmic system developed by an ad-hoc team of experts called Valencia IA4COVID, hosted by the Alicante node of the ELLIS Foundation and working with the Valencian regional government.

The aim of the algorithm is to prescribe what regulations would be "optimal" for the authorities to implement to keep the social costs of restrictions and the number of Covid-19 cases to a minimum.
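
The team's actual models are not reproduced here, but the general "predict-then-prescribe" idea behind such systems can be sketched as follows: a predictor estimates future cases under each candidate policy, and a prescriptor picks the policy that minimises a weighted sum of predicted cases and social cost. The toy epidemic model, the candidate policies, their costs and the weights below are all invented for illustration.

# Toy predict-then-prescribe loop: invented numbers, not the IA4COVID models.
CANDIDATE_POLICIES = {   # hypothetical stringency levels and social costs
    "no_restrictions": {"stringency": 0.0, "social_cost": 0.0},
    "mask_mandate":    {"stringency": 0.2, "social_cost": 1.0},
    "curfew":          {"stringency": 0.5, "social_cost": 3.0},
    "full_lockdown":   {"stringency": 0.9, "social_cost": 8.0},
}

def predict_cases(current_cases, stringency, days=14):
    """Toy predictor: daily case growth shrinks as stringency rises."""
    daily_growth = 1.0 + 0.08 * (1.0 - stringency) - 0.03 * stringency
    return current_cases * daily_growth ** days

def prescribe(current_cases, case_weight=1.0, cost_weight=500.0):
    """Pick the policy minimising predicted cases plus weighted social cost."""
    def objective(policy_name):
        policy = CANDIDATE_POLICIES[policy_name]
        return (case_weight * predict_cases(current_cases, policy["stringency"])
                + cost_weight * policy["social_cost"])
    return min(CANDIDATE_POLICIES, key=objective)

print(prescribe(current_cases=200))    # low incidence: lighter measures win
print(prescribe(current_cases=20000))  # high incidence: stricter measures win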

  • ELLIS Foundation Alicante (2021). 500K XPrize Challenge on Pandemic Response sponsored by Cognizant. https://ellisalicante.org/xprize [last visited on 19 November 2021]
  • ELLIS Society (2021). Dr. Nuria Oliver and her team take the grand prize in global pandemic response challenge competition. https://ellis.eu/news/dr-nuria-oliver-and-her-team-take-the-grand-prize-in-global-pandemic-response-challenge-competition [last visited on 19 November 2021]
2021-11-19
ISSA | Machine-learning chatbot about social security
Spanish Ministry for Social Security
Spain
Google (owned by Alphabet)
Spanish Ministry for Social Security
social services
simulating human speech
public administration
other kinds of discrimination

N/A

2020-04-XX
N/A

N/A

active use

"With all of its offices forced to close due to social distancing measures made necessary by COVID-19, Spain’s Social Security administration" needed a "solution that could replace the customer service of several physical offices and handle the high volume of inquiries during the pandemic. The solution also had to consider General Data Protection Regulation (GDPR) guidelines and offer secure online support to citizens, from providing appointment booking to digitizing documents. (...) Working closely with Google Cloud, [the Spanish Ministry for Social Security] developed the International Social Security Association (ISSA) Social Security assistant based on Dialogflow, the Google Cloud artificial intelligence technology used to create chatbots that are trained to learn the natural language of users to provide interactive responses. At the same time, Spain’s Social Security administration developed a set of cloud-based applications to collect citizens’ requests for minimum vital income b

  • Google Cloud (2020). Spain’s Social Security administration: Using AI to deliver benefits to citizens more efficiently, https://cloud.google.com/customers/seguridad-social
  • Spanish Ministry for Social Security (2020a). ISSA, el asistente virtual de la Seguridad Social suma más de 2 millones de interacciones en apenas un mes de vida. 17 June 2020, https://revista.seg-social.es/2020/06/17/issa-el-asistente-virtual-de-la-seguridad-social-suma-mas-de-2-millones-de-interacciones-en-apenas-un-mes-de-vida/
2021-11-11
VIOGÉN | Assessing the risk of reincidence in cases of gender-based violence
Spanish Ministry of the Interior
Spain
Spanish Ministry of the Interior
policing and security
predicting human behaviour
evaluating human behaviour
public administration
other kinds of discrimination

VioGén has not been externally audited.

In 2018, the Spanish Ministry of Interior published a book about the development of the VioGén protocol (Spanish Ministry of the Interior, 2018). This book provides quite an informative account of how the algorithmic system was designed and how it works. However, the book doesn't reveal the algorithm's code or how much weight the different questions in the protocol are given (by themselves and in relation to other questions) for the algorithm to generate a risk score.

2007-XX-XX
N/A

N/A

active use

"VioGén was launched in 2007 following a 2004 law on gender-based violence that called for an integrated system to protect women [Spanish Official Gazette, 2004]. Since then, whenever a woman makes a complaint about domestic violence a police officer must give her a set of questions from a standardized form. An algorithm uses the answers to assess the risk that the women will be attacked again. These range from: no risk observed, to low, medium, high or extreme risk. If, later on, an officer in charge of the case thinks a new assessment is needed, VioGén includes a second set of questions and a different form, which can be used to follow up no the case and which the algorithm uses to produce an updates assessment of the level of risk. The idea was that the VioGén protocol would help police officers all over Spain produce consistent and standardized evaluations of the risks associated with domestic violence, and that all the cases that are denounced would benefit from a more structured

  • Spanish Ministry of the Interior (2021). Public Services. Violence against women. Sistema VioGén. http://www.interior.gob.es/en/web/servicios-al-ciudadano/violencia-contra-la-mujer/sistema-viogen
  • Spanish Ministry of the Interior (2018). La valoración policial del riesgo de violencia contra la mujer en España. VioGén. Sistema de seguimiento integral en los casos de violencia de género. Available as a PDF document in Spanish on http://www.interior.gob.es/documents/642012/8791743/Libro+Violencia+de+G%2525C3%2525A9nero/19523de8-df2b-45f8-80c0-59e3614a9bef
2021-11-04
BOSCO | Allocation of financial aid for the electricity bill
Spanish electricity companies
Spain
Spanish Secretariat for Energy
social services
automating tasks
profiling and ranking people
public administration
other kinds of discrimination

No. Spanish investigative organisation Civio asked the Spanish government to release the source code of BOSCO and the government refused. Civio took the government to court and the case is ongoing (Civio, 2019).

2017-XX-XX
N/A

"Civio filed an administrative appeal on June 20 after the Council of Transparency and Good Governance (CTBG) declined to force the release of source code of software dismissing eligible aid applicants. (...)

"The complexity of the process combined with the malfunctioning software, BOSCO, and lack of information about the nature of rejections resulted in only 1,1 million people out of 5,5 potential beneficiaries profiting from the so-called Bono Social. The former government estimated 2,5 million people would receive the subsidy. (...)

active use

"In the midst of the economic crisis, in 2009, the Spanish government passed a law subsidizing the electricity bills of about five million poor households. The subsidy, called social bonus or bono social in Spanish, has been fought in court by the country’s electric utilities ever since, with some success. Following a 2016 ruling, the government had to introduce new, tighter regulations for the social bonus and all beneficiaries had to re-register by 31 December 2018.

"Half a million refusals

  • Spanish Ministry for Ecological Transition (2021). Bono Social. https://www.bonosocial.gob.es and https://energia.gob.es/bono-social
  • Spanish Official Gazette (BOE) (2017). Real Decreto 897/2017Real Decreto 897/2017, de 6 de octubre, por el que se regula la figura del consumidor vulnerable, el bono social y otras medidas de protección para los consumidores domésticos de energía eléctrica. https://www.boe.es/eli/es/rd/2017/10/06/897/con
2021-10-29
Algorithm to predict the risk of emergencies in elderly care
N/A
N/A
Bismart
social services
predicting human behaviour
evaluating human behaviour
profiling and ranking people
public administration
socioeconomic discrimination
threat to privacy

N/A

N/A
N/A

N/A

not known

A Spanish firm called Bismart has developed software that allows public agencies to predict the needs of dependent elderly people before an emergency occurs. The system aggregates data related to “social services, health, population, economic activity, utility usage, waste management and more” to give risk assessment analysis for the elderly (Algorithm Watch 2019). The company hopes to transition home care for the elderly from a “palliative to a proactive approach”, as well as allocate health care resources in the most efficient way possible (ibid).

Both the right to privacy and the right to non-discrimination could potentially be affected by this software. Given the number of data sources processed to make the predictions, the system poses a substantial risk of reproducing structural biases that already exist in society.

  • AW AlgorithmWatch gGmbH (2019). “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.123.
2021-10-11
Algorithm to identify children's need of social services
Espoo Municipality
Espoo (Finland)
Finland
Tieto
social services
profiling and ranking people
evaluating human behaviour
predicting human behaviour
public administration
threat to privacy

Yes, by 2019 the Espoo Municipality was discussing ethical questions related to the use of such alert systems with expert organisations such as The Finnish Center for Artificial Intelligence (FCAI).

2016-XX-XX
N/A

N/A

not known

In Espoo, the second-largest Finnish city, software aimed at identifying risk factors associated with the need for social and medical services among children has been deployed, raising ethical concerns (Algorithm Watch 2019). The model, developed by a firm called Tieto, analyzes anonymized health care and social care data of the city’s population and client data from early childhood education. In preliminary results, it identified “approximately 280 factors that could anticipate the need for child welfare services” (ibid). The system was unprecedented in Finland; no other program had ever used machine learning to integrate and analyze public service data (ibid). The next iteration of the system will seek to “utilise AI to allocate services in a preventive manner and to identify relevant partners to cooperate with towards that aim” (ibid).

Finnish authorities are taking ethical and legal issues seriously, but the use of predictive algorithms by public agencies tasked with allocating social services remains contested.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.62.
2021-08-11
Algorithm to match job applicants with offers
DigitalMinds
Finland
DigitalMinds
labour and employment
profiling and ranking people
compiling personal data
private sector
threat to privacy
socioeconomic discrimination

N/A

2017-XX-XX
N/A

N/A

not known

The Finnish start-up DigitalMinds has created software aimed at efficiently matching job applicants’ profiles with the profile considered optimal for each job description (Algorithm Watch 2019). The firm hopes to remove human participation from the job application process, ideally speeding it up and making it more reliable. The system draws from public online interfaces like Twitter, Facebook, Gmail and Microsoft Office to create a digital profile that includes a complete personality assessment (ibid).

DigitalMinds claims that it has received no objections to its analytic practices from prospective candidates thus far (ibid). However, perhaps the consent given by a job applicant should be questioned given the distinct power asymmetry between a potential employer and an applicant.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.51.
2021-08-11
SHOTSPOTTER | Automatic detection of gunshots
Oakland Police Department
New York City Police Department
Cincinnati Police Department
Denver Police Department
Chicago Police Department
Saint Louis County Police Department
San Diego Police Department
Pittsburgh Police Department
Oakland (US)
United States (US)
New York City (US)
Cincinnati (US)
Denver (US)
Chicago (US)
Saint Louis County (US)
San Diego (US)
Pittsburgh (US)
N/A
policing and security
recognising sounds
public administration
threat to privacy

N/A

2012-XX-XX
N/A

N/A

not known

The Oakland Police Department (OPD) has deployed an algorithmic tool called ShotSpotter to fight and reduce gun violence (ShotSpotter 2018). The system detects gunshots through sound-monitoring microphones (Gold 2015). The gathered data are processed by an algorithm that identifies the type of event and alerts the police. According to the developer, the tool has been credited with a 29% reduction in gunfire incidents in Oakland between 2012 and 2017 (ibid). The software has also been implemented in several other American cities, reportedly with similar degrees of success, including New York City, Cincinnati, Denver, Chicago, St. Louis County, San Diego, and Pittsburgh.

The system faces controversy regarding the installation of its sensors in public spaces and their capacity for surveillance. For example, ShotSpotter has been found to erroneously record private conversations in various instances (Goode 2012). ShotSpotter sensors have also been placed on university campuses all across America, where they have raised similar privacy concerns (Gold 2015).

  • Carr, J. and Doleac, J. (2016). The Geography, Incidence, and Underreporting of Gun Violence: New Evidence Using Shotspotter Data. SSRN Electronic Journal. Available on http://www.hoplofobia.info/wp-content/uploads/2014/05/Carr_Doleac_gunfire_underreporting.pdf [Accessed 25 Oct. 2018]
  • Gold, H. (2015). ShotSpotter: gunshot detection system raises privacy concerns on campuses. The Guardian. Available on https://www.theguardian.com/law/2015/jul/17/shotspotter-gunshot-detection-schools-campuses-privacy [Accessed 22 Oct. 2018]
2021-08-11
Algorithm to manage drivers on a riding app
Uber
worldwide
Uber
labour and employment
evaluating human behaviour
profiling and ranking people
private sector
threat to privacy
manipulation / behavioural change

N/A

2009-XX-XX
N/A

N/A

active use

Ride-sharing app Uber classifies all of its drivers as independent contractors and manages them largely through algorithms rather than human supervisors. The firm has faced criticism for creating a work environment that feels like a world of “constant surveillance, automated manipulation and threats of ‘deactivation'” (Rosenblat 2018). One African-American driver from Pompano Beach, Fla., Cecily McCall, terminated a trip early because a passenger called her ‘stupid’ and a racial epithet. She explained the situation to an Uber support representative and promptly received an automated message that said, “We’re sorry to hear about this. We appreciate you taking the time to contact us and share details” (ibid). The representative’s only recourse was to stop matching the passenger with McCall in the future. Outraged, McCall responded, “So that means the next person that picks him up he will do the same while the driver gets deactivated” (ibid). McCall had noticed how Uber’s algorithmic management effectively punishes drivers while leaving abusive passengers largely unaccountable.

  • Rosenblat, A (2018): “When your boss is an algorithm”. In The New York Times https://www.nytimes.com/2018/10/12/opinion/sunday/uber-driver-life.html
2021-08-11
Algorithm to identify families' need of child social services
Hackney Council
Thurrock Council
Hackney (UK)
Thurrock (UK)
United Kingdom (UK)
Xantura
social services
profiling and ranking people
predicting human behaviour
public administration
threat to privacy
socioeconomic discrimination

N/A

2018-XX-XX
N/A

N/A

not known

In 2018, British local authorities implemented predictive analytical software designed to identify families in need of child social services. The algorithms were oriented towards improving the allocation of public resources and preventing child abuse (McIntyre and Pegg 2018). In order to build the predictive system, data from 377,000 people were incorporated into a database managed by several private companies. The town councils of Hackney and Thurrock both hired a private company, Xantura, to develop a predictive model for their children’s services teams. Two other councils, Newham and Bristol, developed their own systems internally.

Advocates of the predictive software argue that it enables councils to better target limited resources, so they can act before tragedies happen. Richard Selwyn, a civil servant at the Ministry of Housing, Communities and Local Government, argues that it is not beyond the realm of possibility that one day councils will know exactly which services or interventions will work (McIntyre and Pegg 2018).

  • McIntyre, N and Pegg, D (2018): “Councils use 377,000 people’s data in efforts to predict child abuse” In The Guardian https://www.theguardian.com/society/2018/sep/16/councils-use-377000-peoples-data-in-efforts-to-predict-child-abuse
2021-08-11
Algorithm to suggest personalised audiovisual content to users
Netflix
worldwide
Netflix
communication and media
evaluating human behaviour
predicting human behaviour
profiling and ranking people
private sector
threat to privacy

Some studies have proposed solutions to Netflix’s privacy issues, including:

– McSherry, F., Mironov, I (2009): Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contenders. In KDD’09, Paris, France.

N/A
N/A

N/A

active use

Netflix employs algorithmic processing to create profiles of users from collected metadata and to predict which content they are most likely to consume. Every time one of its hundreds of millions of users selects a series or a movie, the system gathers a host of data about that viewing session, such as clicks, pauses, and indicators that the user stopped watching (Narayanan and Shmatikov 2008). Netflix aggregates such data into profiles that infer further information about the user.

Even though the company has established several security mechanisms and uses proxies of users’ personal data to construct their profiles, the system is quite vulnerable to deanonymization attacks. Researchers Narayanan and Shmatikov (2008) were able to break the anonymization of Netflix’s published ratings database by analyzing auxiliary information that some users had made public elsewhere. Their work demonstrated a “new class of statistical deanonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on” (Narayanan and Shmatikov 2008).
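
A heavily simplified sketch of the matching idea behind such deanonymization attacks: an adversary who knows a handful of a target's approximate ratings scores every anonymized record by how well it matches and picks the best candidate. The data and the scoring rule below are invented and much cruder than the statistical scoring used by Narayanan and Shmatikov.

# Toy sparse-record matching, loosely inspired by the Netflix attack.
anonymized_db = {   # invented anonymized ratings records
    "user_017": {"Movie A": 5, "Movie B": 2, "Movie C": 4, "Movie D": 1},
    "user_042": {"Movie A": 4, "Movie C": 5, "Movie E": 3},
    "user_099": {"Movie B": 1, "Movie D": 5, "Movie F": 2},
}

# What the attacker knows about the target from a public source (e.g. reviews
# posted elsewhere under the target's real name); also invented.
auxiliary_info = {"Movie A": 4, "Movie C": 5, "Movie E": 3}

def match_score(record, aux, tolerance=1):
    """Fraction of the auxiliary ratings matched within the tolerance."""
    hits = sum(1 for movie, rating in aux.items()
               if movie in record and abs(record[movie] - rating) <= tolerance)
    return hits / len(aux)

best = max(anonymized_db, key=lambda uid: match_score(anonymized_db[uid], auxiliary_info))
print(best, match_score(anonymized_db[best], auxiliary_info))  # "user_042" matches fully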

  • Narayanan, A., Shmatikov, V (2008): Robust De-anonymization of Large Sparse Datasets. In IEEE Symposium on Security and Privacy, pp. 111–125. IEEE Computer Society.
  • Zhao, Y., Chow, S.S.M (2015): Privacy preserving collaborative filtering from asymmetric randomized encoding. In Böhme, R., Okamoto, T. (eds.) FC 2015. LNCS, vol. 8975, pp. 459–477. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-47854-7_28
2021-08-11
Algorithm to assess potential harm by gang members
London Metropolitan Police
London (UK)
United Kingdom (UK)
N/A
policing and security
evaluating human behaviour
predicting human behaviour
profiling and ranking people
public administration
socioeconomic discrimination
racial discrimination

N/A

2012-XX-XX
N/A

N/A

not known

The Metropolitan Police of London has implemented an automated system to assess the potential harm that gang members pose to public safety. Since 2012, the initiative has used algorithmic processing to collect, identify, and exchange data about individuals related to or belonging to gangs (Amnesty International 2018). The system uses the gathered information to calculate two scores: 1) the probability that an individual joins a gang and 2) their level of violence.

However, the specific metrics and criteria used to assign “harm scores” have not been revealed by the Metropolitan Police, which raises questions about transparency. Amnesty International has pointed out that the ‘Ending Gang and Youth Violence Strategy for London’ from 2012 gives some insight into the issue. According to that document, each gang member would be scored taking into account the number of crimes he or she has committed in the past three years, “weighted according to the seriousness of the crime and how recently it was committed” (Amnesty International 2018).
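
The Metropolitan Police has not published its formula, so the sketch below only illustrates the weighting principle described in the 2012 strategy document: offences from the past three years contribute to a harm score according to their seriousness and how recently they occurred. The seriousness values and the decay rule are invented.

from datetime import date

# Sketch of a harm score weighted by offence seriousness and recency.
# The actual Metropolitan Police formula and weights are not public.
SERIOUSNESS = {"theft": 1, "robbery": 3, "assault": 5, "firearms_offence": 8}

def recency_weight(offence_date, today):
    """Offences older than three years count for nothing; recent ones count more."""
    years_ago = (today - offence_date).days / 365.25
    if years_ago > 3:
        return 0.0
    return 1.0 - years_ago / 3.0   # linear decay over the three-year window

def harm_score(offences, today=date(2018, 1, 1)):
    return sum(SERIOUSNESS[kind] * recency_weight(when, today)
               for kind, when in offences)

record = [("robbery", date(2017, 7, 1)), ("theft", date(2015, 6, 1))]
print(round(harm_score(record), 2))   # the recent robbery dominates the score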

  • Amnesty International United Kingdom Section. (2018). Trapped in the Matrix: Secrecy, stigma, and bias in the Met’s Gangs Database. Available at: https://www.amnesty.org.uk/files/reports/Trapped%20in%20the%20Matrix%20Amnesty%20report.pdf [Accessed 5 Sep. 2018].
  • Dodd, V. (2018). UK accused of flouting human rights in ‘racialised’ war on gangs. The Guardian. [online] Available at: https://www.theguardian.com/uk-news/2018/may/09/uk-accused-flouting-human-rights-racialised-war-gangs [Accessed 29 Oct. 2018].
2021-08-11
Algorithm to predict 'hot zones' for crimes
Manchester Police Department
Manchester (UK)
United Kingdom (UK)
IBM
policing and security
predicting human behaviour
public administration
threat to privacy
racial discrimination
socioeconomic discrimination

N/A

2012-XX-XX
N/A

N/A

not known

The Manchester Police Department has implemented a predictive policing system developed by IBM, which is able to identify hot zones for crimes in the city, mainly robbery, burglary and theft (Bonnette 2016). On the basis of historical data on crime hotspots, which contains a variety of variables related to crime (location, day, weather conditions, etc.), the system predicts where future criminal events are likely to take place (Dearden 2017). The system has been reported to cause a reduction in criminal activity in the targeted areas, as well as an improvement in public confidence in the police (UCL 2012).
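
IBM's model is proprietary, but the basic place-based idea can be sketched as binning historical incidents into grid cells and ranking the cells with the most past incidents as "hot zones". The coordinates, cell size and ranking rule below are invented, and the sketch ignores the additional variables (day, weather conditions, etc.) mentioned above.

from collections import Counter

# Minimal frequency-based hot-zone ranking from invented incident records.
CELL_SIZE = 0.01   # degrees; each incident is binned into a grid cell

incidents = [   # (latitude, longitude) of invented past burglaries and thefts
    (53.4808, -2.2426), (53.4811, -2.2431), (53.4795, -2.2419),
    (53.4903, -2.2301), (53.4812, -2.2428),
]

def grid_cell(lat, lon):
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

counts = Counter(grid_cell(lat, lon) for lat, lon in incidents)

# Cells with the most historical incidents are treated as likely hot zones
# and would, in such a system, receive additional patrols.
for cell, n in counts.most_common(2):
    print(cell, n)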

As is the case with other predictive policing tools, the system presents great risks to privacy as well as the reproduction of structural societal biases. It has been well documented that when algorithms train from historical data sets, they learn how to emulate racially biased policing (ibid).

  • Dearden, L. (2017). How technology is allowing police to predict where and when crime will happen. The Independent. [online] Available at: https://www.independent.co.uk/news/uk/home-news/police-big-data-technology-predict-crime-hotspot-mapping-rusi-report-research-minority-report-a7963706.html#r3z-addoor [Accessed 26 Oct. 2018].
  • Bonnette, Greg (2016). Real World Predictive Policing: Manchester, NH Police Department. Available at: https://www.slideshare.net/gregbonnette/real-world-predictive-policing-manchester-nh-police-department [Accessed 5 Sep. 2018].
2021-08-11
Algorithm to do facial recognition for the police
New York City Police Department
New York City (US)
United States (US)
IBM
policing and security
identifying images of faces
public administration
threat to privacy
racial discrimination

N/A

2010-XX-XX
N/A

N/A

not known

The New York Police Department has developed its own algorithmic facial identification system used in both routine investigations and concrete criminal events, such as terrorist attacks (Fussell, 2018). The NYPD system compares biometric data stored in its database in order to check previous criminal records and identify suspects (Garvie, et al., 2016). Critics argue the system lacks transparency, hiding internal operations that produced biased results (Ibid).

Controversy exists surrounding the system’s tendency toward misidentification. Facial recognition has been shown to be markedly less accurate for anyone who is not a white male (Lohr, 2018). A study demonstrated that “the darker the skin, the more errors arise” (ibid). The technology’s disproportionate inaccuracy with people of color has sparked debate on the impact of such systems on the privacy and civil rights of racial minorities, leading Georgetown lawyers to sue the NYPD over the opacity of the facial recognition software (Fussell, 2018).

  • Lohr, S. (2018). Facial Recognition Is Accurate, if You’re a White Guy. The New York Times. [online] Available at: https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html  [Accessed 18 Oct. 2018].
  • Joseph, G. and Lipp, K. (2018). IBM used NYPD surveillance footage to develop technology that lets police search by skin color. [online] The Intercept available at: https://theintercept.com/2018/09/06/nypd-surveillance-camera-skin-tone-search/ [Accessed 25 Oct. 2018].
2021-08-11
Algorithm to assign a risk score to hospital patients
Atrium Health (formerly Carolinas HealthCare)
North Carolina (US)
South Carolina (US)
United States (US)
N/A
social services
profiling and ranking people
evaluating human behaviour
public administration
private sector
socioeconomic discrimination

N/A

N/A
N/A

N/A

not known

Carolinas HealthCare (now Atrium Health), a large network of hospitals and services, is currently applying an algorithmic risk-assessment system to target high-risk patients by using a variety of data such as purchasing records and other environmental variables (Data & Civil Rights Conference, 2014). Algorithmic systems are being used to analyse consumer habits of current or potential patients and construct specific risk profiles. According to a report, one of the largest hospitals in North Carolina “is plugging data for 2 million people into algorithms designed to identify high-risk patients” (Pettypiece and Robertson, 2014).

In general, data analytics plays a considerable role in the healthcare industry. While it promises to modernize the industry, bringing about unprecedented efficiency and advances, it also creates a unique set of challenges related to data privacy. People’s medical conditions and health can be deduced from a variety of indicators like purchasing history, phone call patterns, and online browsing behavior.

  • Rosenblat, A; Wikelius, K; Boyd, D; Gangadharan, S.P; Yu, C (2014): “Data & Civil Rights: Health Primer” In Data & Civil Rights Conference http://www.datacivilrights.org/pubs/2014-1030/Health.pdf [Accessed 17 Oct. 2018].
  • Pettypiece, Shannon and Jordan Robertson, “Your Doctor Knows You’re Killing Yourself. The Data Brokers Told Her,” In Bloomberg, June 26, 2014. http://www.bloomberg.com/news/2014-06-26/hospitals-soon-see-donuts-to-cigarette-charges-for-health.html [Accessed 5 Sep. 2018]
2021-08-11
Algorithm to identify facial features when taking pictures
Nikon
worldwide
Nikon
business and commerce
recognising facial features
private sector
racial discrimination

N/A

2009-XX-XX
N/A

N/A

active use

In 2010, a Taiwanese-American family discovered what seemed to be a malfunction in Nikon Coolpix S630 camera: every time they took a photo of each other smiling, a message flashed across the screen asking, “Did someone blink?” No one had, so they assumed the camera was broken (Rose, 2010).

Face detection, one of the latest smart technologies to trickle down to consumer cameras, is supposed to make taking photos more convenient. Some cameras with face detection are designed to warn you when someone blinks; others are triggered to take the photo when they see that you are smiling (Rose, 2010). Nikon’s camera was not broken, but its face detection software was wildly inaccurate unless the subjects were Caucasian. Nikon has not commented on the error in its software, but Adam Rose of TIME magazine has offered an explanation: the algorithm probably learned to determine ‘blinking’ from a dataset made up mostly of Caucasian faces (ibid). Having rarely seen Asian eyes during training, the software tends to misread them as blinking.

  • Rose, A (2010): “Are Face-Detection Cameras Racist?” In Time http://content.time.com/time/business/article/0,8599,1954643,00.html
2021-08-11
Google AdSense algorithm
Google (owned by Alphabet)
worldwide
Google (owned by Alphabet)
business and commerce
communication and media
profiling and ranking people
private sector
gender discrimination

N/A

2000-XX-XX
N/A

N/A

active use

A team of researchers from Carnegie Mellon University discovered that female job seekers are much less likely than men to be shown Google adverts for highly paid jobs (Datta, Tschantz and Datta, 2015). They deployed an automated testing rig called AdFisher that created over 17,370 fake job-seeking profiles. The profiles were shown 600,000 advertisements by Google’s advertising platform AdSense, which the team tracked and analyzed (ibid). The researchers’ analysis found discrimination and opacity: males were disproportionately shown ads for coaching services for high-paying jobs. The researchers were unable to deduce why such discrimination occurred given the opacity of Google’s ad ecosystem; however, they offer the experiment as a starting point for internal investigation or for the scrutiny of regulatory bodies (ibid).
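
A simplified sketch of the experimental methodology: simulated browsing agents that differ only in the declared gender of their profile are "shown" ads, and the rate at which a particular ad appears is compared across the two groups. The simulated ad server and its probabilities are invented stand-ins for the opaque real system, and the analysis is reduced to a two-proportion z-score rather than AdFisher's machine-learning and permutation tests.

import random
from math import sqrt

random.seed(0)

def simulated_ad_server(profile_gender):
    """Invented stand-in for the opaque ad system being probed: returns True
    if the high-paying-job coaching ad is shown to this agent."""
    return random.random() < (0.18 if profile_gender == "male" else 0.05)

def run_group(gender, n_agents=1000):
    return sum(simulated_ad_server(gender) for _ in range(n_agents))

n = 1000
male_hits, female_hits = run_group("male", n), run_group("female", n)
p_male, p_female = male_hits / n, female_hits / n
p_pool = (male_hits + female_hits) / (2 * n)
z = (p_male - p_female) / sqrt(p_pool * (1 - p_pool) * (2 / n))
print(f"male rate={p_male:.3f}  female rate={p_female:.3f}  z={z:.1f}")
# A large |z| indicates the difference is very unlikely to be chance alone.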

  • Datta A, Tschantz MC and Datta A (2015): “Automated experiments on ad privacy settings. A Tale of Opacity, Choice and Discrimination” In Proceedings on Privacy Enhancing Technologies, pp. 92-112. https://content.sciendo.com/view/journals/popets/2015/1/article-p92.xml
  • The Guardian’s Article about the study: Gibbs, S (2015): “Women less likely to be shown ads for high-paid jobs on Google, study shows”. In The Guardian https://www.theguardian.com/technology/2015/jul/08/women-less-likely-ads-high-paid-jobs-google-study
2021-08-11
Algorithm to predict customers' future purchases
Target
United States (US)
N/A
business and commerce
predicting human behaviour
profiling and ranking people
evaluating human behaviour
private sector
threat to privacy

N/A

2012-XX-XX
N/A

N/A

not known

Target, a large American retailer, implemented software in 2012 designed to improve customer tracking in order to better predict customers’ future purchases (Hill 2012). The system would create profiles of customers through credit card information and would tailor advertisements and coupons to profiles with certain purchasing behavior. For example, the system recognized 25 products that, when purchased together, suggested that the buyer had a high probability of being pregnant (ibid). Upon identifying a possibly pregnant customer, Target would send coupons for maternity products, baby food, and the like. In 2012, after receiving emails with offers of products for babies, a father went to his local Target demanding to speak with a manager. He yelled at the manager, saying, “My daughter got this in the mail! She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?” The manager apologized to him; days later, the father called back to apologize in turn, having learned that his daughter was in fact pregnant (ibid).
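
Target's actual list of indicator products and their weights were never disclosed, so the following only sketches the general idea of a purchase-based score compared against a mailing threshold. The products, weights and threshold are invented.

# Sketch of a purchase-based "pregnancy prediction" score with invented values.
PREGNANCY_INDICATORS = {
    "unscented lotion": 2.0,
    "large cotton balls": 1.5,
    "zinc supplement": 1.5,
    "magnesium supplement": 1.5,
    "scent-free soap": 1.0,
}
MAILING_THRESHOLD = 4.0   # invented cut-off for sending baby-product coupons

def pregnancy_score(purchases):
    return sum(PREGNANCY_INDICATORS.get(item, 0.0) for item in purchases)

basket = ["unscented lotion", "zinc supplement", "magnesium supplement", "bread"]
score = pregnancy_score(basket)
print(score, score >= MAILING_THRESHOLD)   # 5.0 True: coupons would be mailed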

  • Hill, K (2012): “How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did” In Forbes https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/#704fb5506668
2021-08-11
Google Photos algorithm
Google (owned by Alphabet)
worldwide
Google (owned by Alphabet)
communication and media
recognising images
private sector
racial discrimination

N/A

2015-XX-XX
N/A

N/A

not known

In May 2015 Google launched an app for sharing and storing photos called “Google Photos”. Google Photos includes an algorithmic system that groups people’s photos by automatically labelling their albums with tags. At first, the labelling algorithms appeared to work well, until the system was found to misrecognize dark-skinned faces, tagging Black people as “gorillas”. This mistake was noticed by Jacky Alcine, a young man from New York, while checking his Google Photos account on 28 June 2015: the image-recognition algorithm had generated a folder titled ‘Gorillas’ that contained only images of him and a friend.

Three years after the event, Wired revealed that Google had “fixed” the problem by reducing the system’s functionality: the company simply blocked Google Photos’ algorithms from identifying gorillas altogether (Simonite, 2018).

  • Simonite, T (2018): “When it comes to gorillas, google photos remains blind”. In Wired https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/
  • Vincent, J (2018): “Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech”. In The Verge: https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
2021-08-11
Algorithm to manage the energy infrastructure
German Ministry of the Economy and Energy
Germany
N/A
infrastructure
predicting human behaviour
automating tasks
public administration
threat to privacy

N/A

2017-XX-XX
N/A

N/A

not known

The rise of renewable energy and the dynamic trade of electrical energy across European markets has prompted Germany to turn to algorithms to manage its energy infrastructure. Automated systems in the form of SCADA systems (Supervisory Control and Data Acquisition) have brought smoother feed-in, control of decentralized electricity, and added resilience to the energy supply (AlgorithmWatch 2019). These systems utilize machine learning and data analytics to monitor and avoid supply failures and to mitigate power fluctuations (ibid). Unfortunately, the systems also bring costs, namely to privacy and security. Government experts argue that the new smart systems are more susceptible to cyberattacks (ibid). Given the energy grid’s position as a fundamental piece of national infrastructure, this criticism should be taken seriously. Advocacy groups point out that the automated systems also allow for surveillance within the private home.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.81.
2021-08-10
Facebook search suggestion algorithm
Facebook (owned by Meta)
worldwide
Facebook (owned by Meta)
communication and media
predicting human behaviour
private sector
gender discrimination

N/A

2006-XX-XX
N/A

N/A

not known

Facebook’s search function has been accused of treating pictures of men and women very differently. The algorithm was reported to readily provide results for the search “photos of my female friends” but to yield no results for “photos of my male friends”; according to Matsakis (2019) of WIRED, the algorithm assumed that “male” was a typo for “female”. Worse, when users searched for “photos of my female friends”, the search function auto-filled suggestions like “in bikinis” or “at the beach” (Matsakis 2019). Researchers explain the disparity as the algorithm promoting popular searches over those that are rarely made: since people often search for “photos of my female friends in bikinis”, the algorithm has grown adept at providing such photos and suggesting the search option. Despite being written as unassuming lines of code, algorithms take the shape of whatever data is fed to them, often candidly reflecting human wants, behavior and tendencies.

  • Matsakis, L (2019): “A ‘Sexist’ search bug says more about us than Facebook”. In Wired https://www.wired.com/story/facebook-female-friends-photo-search-bug/
2021-08-10
Google Image Search algorithm
Google (owned by Alphabet)
worldwide
Google (owned by Alphabet)
communication and media
generating online search results
private sector
gender discrimination
  • Researchers suggested that search engine designers could actually use these study results to develop algorithms that would help work against gender stereotypes (Cohn, 2015).
  • March 2017: With growing criticism over misinformation in search results, Google is taking a harder look at potentially “upsetting” or “offensive” content, tapping humans to aid its computer algorithms to deliver more factually accurate and less inflammatory results (Guynn, 2017).
2001-XX-XX
N/A

N/A

active use

A 2015 study of the University of Maryland and the University of Washington highlighted a noticeable gender bias in the Google image search results when submitting queries related to various professions (Cohn 2015). The study not only found a severe under-representation of women in image search results of ‘CEO’ and ‘Doctor’ but demonstrated a causal linkage between the search results and people’s conceptions regarding real-world professional gender representation (ibid). Google declined to comment on the study’s findings.

  • Cohn, E (2015): “Google Image Search Has A Gender Bias Problem”. In HuffPost https://www.huffingtonpost.com/2015/04/10/google-image-gender-bias_n_7036414.html
  • Guynn, J (2017): “Google starts flagging offensive content in search results”. In USA TODAY https://eu.usatoday.com/story/tech/news/2017/03/16/google-flags-offensive-content-search-results/99235548/
2021-08-10
Google Translate algorithm
Google (owned by Alphabet)
worldwide
Google (owned by Alphabet)
communication and media
generating automated translations
private sector
gender discrimination

N/A

2006-XX-XX
N/A

N/A

active use

In an effort to combat gender bias in translations, Google Translate has transitioned to showing gender-specific translations for many languages (Lee 2018). The algorithm inadvertently replicates gender biases prevalent in society as it learns from hundreds of millions of translated texts across the web. This often results in words like “strong” or “doctor” being translated as masculine, and other words, like “nurse” or “beautiful”, as feminine (ibid). Given a biased training data set, the algorithm has a longstanding gender bias problem; it remains to be seen whether the new feature can entirely resolve it.

  • Kuczmarski, J (2018): “Reducing gender bias in Google Translate”. In Google Blog https://www.blog.google/products/translate/reducing-gender-bias-google-translate/
  • Lee, D (2018): “Google Translate now offers gender-specific translations for some languages”. In The Verge https://www.theverge.com/2018/12/6/18129203/google-translate-gender-specific-translations-languages
2021-08-10
Algorithm to rank job applicants
Amazon
worldwide
Amazon
labour and employment
profiling and ranking people
private sector
gender discrimination

N/A

2014-XX-XX
2017-XX-XX

N/A

no longer in use

Amazon has recently abandoned an AI recruiting tool that was biased towards men seeking technical jobs (Snyder 2018). Since 2014, an Amazon team had been developing an automated system that reviewed job applicant resumes. Amazon’s system successfully taught itself to demote resumes with the word “women’s” in them and to give lower scores to graduates of various women’s colleges. Meanwhile, it decided that words such as “executed” and “captured,” which are apparently used more often in the resumes of male engineers, suggested that a candidate should be ranked more highly (ibid). The system’s gender bias failures originate from the biased training data set that it was given to learn from: Amazon’s workforce is unmistakably male-dominated, and thus the algorithm sought after resumes with characteristically male attributes. The team attempted to remove the gender bias from the system but ultimately decided to ditch the program entirely in 2017 (ibid.).
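
Amazon's system itself was never released, but the failure mode described above can be sketched with a toy model that learns token weights from a biased set of past hiring decisions and, as a result, penalises resumes containing terms such as "women's". The tiny "historical" dataset and the scoring rule are invented.

from collections import Counter
from math import log

hired_resumes = [
    "executed project plan captured new market",
    "captured requirements executed rollout",
]
rejected_resumes = [
    "women's chess club captain",
    "women's college graduate",
]

def token_weights(positive_docs, negative_docs, smoothing=1.0):
    """Log-odds-style weight per token: positive if more common in past hires."""
    pos, neg = Counter(), Counter()
    for doc in positive_docs:
        pos.update(doc.split())
    for doc in negative_docs:
        neg.update(doc.split())
    return {t: log((pos[t] + smoothing) / (neg[t] + smoothing))
            for t in set(pos) | set(neg)}

weights = token_weights(hired_resumes, rejected_resumes)

def resume_score(text):
    return sum(weights.get(token, 0.0) for token in text.split())

print(round(resume_score("executed market analysis"), 2))        # scores positive
print(round(resume_score("captain of women's debate team"), 2))  # scores negative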

  • Snyder, B (2018): “Amazon ditched AI recruiting tool that favoured men for technical jobs”. In The Guardian https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine
  • Vincent, J (2018): “Amazon reportedly scraps internal AI recruiting tool that was biased against women”. In The Verge https://www.theverge.com/2018/10/10/17958784/ai-recruiting-tool-bias-amazon-report
2021-08-10
Algorithm to monitor internet traffic for the intelligence services
French Intelligence Services
France
N/A
policing and security
predicting human behaviour
profiling and ranking people
public administration
threat to privacy
state surveillance

N/A

2017-XX-XX
N/A

N/A

not known

‘Black boxes’ are algorithms deployed on Internet Service Provider (ISP) platforms that allow intelligence agencies to monitor internet traffic, often in order to watch for terrorist threats. Two years after ‘black boxes’ were first legalized in France, the first one was implemented. The program has been criticized for lacking legal oversight and has not been audited for scope or specific objectives (Algorithm Watch 2019). These systems have significant implications for privacy, freedom of speech, and equality.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.71.
2021-08-10
Algorithm to identify hate language on Twitter
N/A
Spain
Autonomous University of Madrid
Spanish Ministry of the Interior
policing and security
evaluating human behaviour
automating tasks
public administration
state surveillance

N/A

2017-XX-XX
N/A

N/A

not known

Juan Carlos Pereira Kohatsu, a 24-year-old data scientist, developed an algorithm that detects hate levels across Twitter. The Spanish National Bureau for the Fight against Hate Crimes, an office of the Ministry of the Interior, co-developed the tool and hopes to deploy it to ‘react against local outbursts of hatred’ (Algorithm Watch 2019). According to “El País” (Colomé, 2018), the algorithm tracks about 6 million tweets in 24 hours, filtering more than 500 words linked to insults, sensitive topics, and groups that frequently suffer hate crimes. Pereira’s analysis suggests that the number of hateful tweets remains relatively stable day-to-day in Spain, ranging between 3,000 and 4,000 tweets a day.
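
Published descriptions suggest a keyword filter feeding a machine-learning classifier. The sketch below covers only the first, keyword-filtering step, with placeholder terms and invented tweets standing in for the real lexicon of more than 500 words.

# Minimal keyword-filtering step of a hate-speech monitoring pipeline.
HATE_KEYWORDS = {"slur_a", "slur_b", "insult_c"}   # placeholder terms only

tweets = [   # invented examples
    "what a lovely day in madrid",
    "you are such an insult_c",
    "slur_a people should leave",
]

def contains_hate_term(text):
    return any(term in text.lower().split() for term in HATE_KEYWORDS)

flagged = [t for t in tweets if contains_hate_term(t)]
print(len(flagged), "of", len(tweets), "tweets flagged for further review")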

Public authorities are still figuring out the practical purposes of implementing the tool, and questions regarding the algorithm’s training method remain unresolved. So far, the algorithm has learned to classify tweets as hateful or non-hateful according to subjective criteria.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.122.
  • Pérez Colomé, J (2018): “How much hate is there on Twitter? Not much, but it is constant and there is for everyone” In El País https://elpais.com/tecnologia/2018/11/01/actualidad/1541030256_106965.html
2021-08-04
Google Search algorithm
Google (owned by Alphabet)
worldwide
Google (owned by Alphabet)
business and commerce
communication and media
automating tasks
predicting human behaviour
private sector
racial discrimination
religious discrimination
social polarisation / radicalisation
disseminating misinformation

N/A

N/A
N/A

N/A

active use

Google’s search algorithm has caused serious harm to minority groups. The search engine has been criticized for featuring the auto-fill query “are Jews evil” and for tagging African-Americans as “gorillas” within the images section. Google search has also been criticized for promoting Islamophobia by suggesting offensive and dangerous queries through its autofill function. For example, if a user typed “does Islam” in the Google search bar, the algorithm’s first autofill suggestion was “does Islam permit terrorism” (Abdelaziz 2017). This has troubling effects in the real world, given that research has demonstrated a clear correlation between anti-Muslim searches and anti-Muslim hate crimes (Soltas and Stephens-Davidowitz 2015).

Google has announced that it will continuously remove hateful and offensive content and tweak its algorithm in order to permanently rid itself of the problem, but experts are not optimistic that this is entirely possible. With millions of new pages coming online every day, problematic results keep resurfacing.

  • Abdelaziz, Rowaida (2017): “Google Search Is Doing Irreparable Harm to Muslims” In HuffPost. https://www.huffpost.com/entry/google-search-harming-muslims_n_59415359e4b0d31854867de8
  • Khan, S (2017): “Algorithms for Hate. Trying to See the Hidden Algorithms that Control the Web” In Medium https://medium.com/@ed_saber/algorithms-for-hate-83b72d7f855e
2021-08-04
Algorithm to detect potential threats on social media
Boston Police Department
Boston (US)
United States (US)
Geofeedia
policing and security
profiling and ranking people
public administration
racial discrimination
religious discrimination
socioeconomic discrimination
state surveillance

Yes, there is a study by New York University School of Law’s Brennan Center for Justice (Levinson-Waldman, 2018).

2014-XX-XX
N/A

N/A

not known

The Boston Police Department has implemented an algorithmic monitoring tool developed by Geofeedia (a social media intelligence platform) for detecting potential threats on social media. The software allows law enforcement agencies to trace social media posts and associate them with geographic locations, something that has reportedly been used to target political activists of all sorts by both police departments and private firms (Fang 2016). For instance, it was purportedly deployed against protestors in Baltimore as part of the pilot phase of the project (Brandom 2016). This triggered a reaction from major social media companies that denied Geofeedia access to their data according to a report by the American Civil Liberties Union (Cagle 2016).

On top of that, in 2016 the American Civil Liberties Union of Massachusetts (ACLUM) discovered that between 2014 and 2016 the BPD had been using a set of keywords to identify misconduct and discriminatory behaviors online without notifying the City Council (Asghar, 2018).

  • Asghar, I. (2018). Boston Police Used Social Media Surveillance for Years Without Informing City Council. [Blog] ACLU. Available at: https://www.aclu.org/blog/privacy-technology/internet-privacy/boston-police-used-social-media-surveillance-years-without [Accessed 25 Oct. 2018].
  • Fang, L. (2016). THE CIA IS INVESTING IN FIRMS THAT MINE YOUR TWEETS AND INSTAGRAM PHOTOS. [online] The Intercept. Available at: https://theintercept.com/2016/04/14/in-undisclosed-cia-investments-social-media-mining-looms-large/ [Accessed 25 Oct. 2018].
2021-08-04
TAY | AI Twitter chatbot
Microsoft
worldwide
Microsoft
communication and media
simulating human speech
private sector
gender discrimination
racial discrimination

N/A

2016-XX-XX
N/A

N/A

not known

In March 2016, Microsoft developed an algorithmic chatbot called “Tay” that was created to seem like a teenage girl. The algorithm was designed to learn by interacting with real people on Twitter and the messaging apps Kik and GroupMe.

Tay’s Twitter profile contained the following warning: “The more you talk the smarter Tay gets”. At first, the chatbot seemed to work: Tay’s posts on Twitter matched those of a typical teenage girl. However, after less than a day, Tay started posting explicitly racist, sexist, and anti-Semitic content (Rodriguez 2016). Microsoft took the bot offline and attributed its behavior to a coordinated effort by some users to exploit its learning mechanism (ibid).

  • Rodriguez, A (2016): “Microsoft’s AI Millennial chatbot became a racist jerk after less than a day on Twitter”. In Quartz https://qz.com/646825/microsofts-ai-millennial-chatbot-became-a-racist-jerk-after-less-than-a-day-on-twitter/
  • Vincent, J (2016): “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day” In The Verge https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
2021-08-04
Algorithm to select job candidates
Uber
United States (US)
N/A
labour and employment
profiling and ranking people
automating tasks
private sector
socioeconomic discrimination
racial discrimination
gender discrimination

N/A

N/A
N/A

N/A

not known

Women, among other marginalized groups, often have difficulties finding work due to inherent algorithmic biases (Bharoocha 2019). A brief overview of Uber’s technical workforce illustrates this point. In 2017, the company’s technological leadership was entirely composed of White and Asian persons, with 88.7 percent of employees being male. Uber utilizes an algorithm that it claims selects top talent and speeds up the hiring process. The system evaluates the resumes of previous successful hires at Uber, other hiring data, and select keywords that align with the job description (ibid). Given that the majority of Uber’s previous successful hires were White and Asian males, it makes sense that the algorithm continues to discriminate against women and persons of color: the algorithm has been trained on biased data, and thus reproduces that bias. Textio, a firm that helps companies use gender-neutral language in job descriptions, has demonstrated how many keywords commonly used in job postings are implicitly gendered.

  • Bharoocha, H (2019): “Algorithmic Bias: How Algorithms Can Limit Economic Opportunity for Communities of Color”. In Common Dreams https://www.commondreams.org/views/2019/03/01/algorithmic-bias-how-algorithms-can-limit-economic-opportunity-communities-color 
  • For more information: https://www.bbc.com/news/business-44852852
2021-08-04
FORECAST PLUS | Finding promising university students
N/A
United States (US)
Noel-Levitz
education and training
profiling and ranking people
evaluating human behaviour
private sector
threat to privacy
socioeconomic discrimination
racial discrimination

N/A

N/A
N/A

N/A

not known

Consulting companies in the United States have begun selling proprietary algorithms to universities to help them target and recruit the most promising candidates according to self-selected criteria (O’Neil 2018). For example, the firm Noel-Levitz has created software called “Forecast Plus” that can estimate and rate enrollment prospects by geography, gender, ethnicity, field of study, academics, and more. Another company, RightStudent, aggregates and sells data on prospective applicants’ finances, scholarship eligibility, and learning disabilities to colleges (idem).

Such algorithms raise questions regarding transparency and the reproduction of inequality. On top of the algorithm and its criteria being kept secret from the public, scholars worry that universities will use these algorithms to locate the specific demographics of applicants who can boost their U.S. News & World Report ranking and pay full tuition, rather than focus on offering educational opportunities to the students who most need them.

  • O’Neil, C. (2018). Weapons of math destruction. [London]: Penguin Books.
  • Rightstudent.com. (n.d.). RightStudent – Recruit the Right Students.Right now!. [online] Available at: https://www.rightstudent.com/ [Accessed 7 Jun. 2019].
2021-08-03
GLADSAXE | Identifying vulnerable children
N/A
Denmark
N/A
social services
profiling and ranking people
evaluating human behaviour
predicting human behaviour
public administration
threat to privacy
socioeconomic discrimination
state surveillance

N/A

N/A
N/A

N/A

not implemented

The Danish government had plans to create an automated system called ‘Gladsaxe’ that would track the status of vulnerable children and preemptively put them under state custody before a real crisis struck (Algorithm Watch 2019). The model utilized a points-based system that weighted various domestic circumstances or aspects with point values, for example, mental illness (3000 points), unemployment (500 points), missing a doctor’s appointment (1000 points) or dentist’s appointment (300 points). Danes responded to news of the system with national uproar and mockery (“Oh no, I forgot the dentist. As a single parent I’d better watch out now…” (idem)).
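
Using the point values reported by AlgorithmWatch, a minimal sketch of such a points-based flagging system might look as follows. The intervention threshold and the example household are invented, and the system itself was never put into use.

# Point values as reported by AlgorithmWatch (2019); threshold and household
# data are invented.
GLADSAXE_POINTS = {
    "parental_mental_illness": 3000,
    "parental_unemployment": 500,
    "missed_doctor_appointment": 1000,
    "missed_dentist_appointment": 300,
}
HYPOTHETICAL_THRESHOLD = 3000   # invented score at which a family is flagged

def family_score(indicators):
    """Sum the points for each indicator, multiplied by how often it occurred."""
    return sum(GLADSAXE_POINTS[name] * count for name, count in indicators.items())

household = {
    "parental_unemployment": 1,
    "missed_doctor_appointment": 2,
    "missed_dentist_appointment": 1,
}
score = family_score(household)
print(score, score >= HYPOTHETICAL_THRESHOLD)   # 2800 False: not flagged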

‘Gladsaxe’ was a part of a larger Danish governmental ‘ghetto-plan’ to fight against ‘parallel societies’. The strategy created a host of category-criteria for a ‘ghetto’ and special guidelines that would be applied against that area, such as “higher punishments for crimes, forcing children into public daycare at an early age, lifting the pro

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.63.
2021-08-03
Belgian police predictive algorithm
Belgian Federal Police
Belgium
N/A
policing and security
predicting human behaviour
public administration
socioeconomic discrimination
racial discrimination

N/A

2016-XX-XX
N/A

N/A

not known

In 2016 police forces on the Belgian coast began to apply predictive policing software to their operations, echoing a larger movement and investment within the Belgian government to build centralized cloud-based police data infrastructure (Algorithm Watch 2019). The chief commissioner claims that the project has been successful and correlates a 40% drop in crime to the start of the project (idem). There are hopes to further bolster the system by connecting it to Automatic Number Plate Recognition cameras (idem). Belgian federal and local police have called for the expansion of predictive policing, believing it to be a critical tool that not only saves time and money but prevents and stops crime. According to a spokesperson of the Federal Police, these tools and systems are currently being engineered for the national scale with data sets being collected and aggregated from police forces and third-party sources (idem).

Similar to other predictive policing software, the model relies on historical crime data, and therefore risks reproducing the biases embedded in past policing practices.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.44
  • Dominique Soenens (2019): “De politie van de toekomst: iedereen verdacht?” In https://dominiquesoenens.wordpress.com/2016/06/09/de-politie-van-de-toekomst-iedereen-verdacht/ 
2021-08-03
VERIPOL | Identifying fraudulent complaints to the police
Spanish National Police
Spain
N/A
policing and security
evaluating human behaviour
public administration
socioeconomic discrimination

Yes. An impact assessment has been carried out (Quijano-Sánchez et al., 2018), but it does not account for issues of discrimination.

2018-XX-XX
N/A

N/A

not known

VeriPol was created by an international team of researchers to help police identify fraudulent reports. The system uses natural language processing and machine learning to analyze a complaint and predict the likelihood that the report is false. It focuses its analysis on three critical variables within a complaint: the modus operandi of the reported aggression, the morphosyntax (grammatical and logical composition) of the report, and the amount of detail given by the complainant (Objetivo Castilla-La Mancha Noticias 2018). The system learns from two different data sets, one made up of false complaints and the other of regular complaints (Pérez Colomé 2018). The system has been tested on over one thousand occasions since 2015 by the Spanish National Police and has achieved a 91% accuracy rate. VeriPol was made in response to a recent increase in fabricated reports of violent robberies, intimidation, and theft (Peiró 2019). Researchers claim that this tool could help improve the efficiency of police work.
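
The actual VeriPol model relies on richer morphosyntactic features and a large corpus of labelled complaints. The sketch below only illustrates the same supervised text-classification idea with a simple bag-of-words model trained on a few invented complaint snippets; it assumes scikit-learn is available.

# Minimal text-classification sketch of the VeriPol idea, not the real model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

complaints = [   # invented snippets, loosely mimicking detailed vs. vague reports
    "two men in helmets grabbed my phone on calle mayor and ran towards the metro",
    "the thief wore a red jacket and I can describe his face and the exact time",
    "someone stole my phone somewhere and I did not see anything or anyone",
    "I was robbed of my wallet and phone but cannot recall where or how",
]
labels = [0, 0, 1, 1]   # 0 = treated as truthful, 1 = treated as false

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(complaints)
model = LogisticRegression().fit(X, labels)

new_report = ["my bag was stolen but I do not remember anything about the place"]
prob_false = model.predict_proba(vectorizer.transform(new_report))[0, 1]
print(f"estimated probability that the report is false: {prob_false:.2f}")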

  • Ucm.es. (n.d.). Veripol, inteligencia artificial a la caza de denuncias falsas. [online] Available at: https://www.ucm.es/otri/veripol-inteligencia-artificial-a-la-caza-de-denuncias-falsas/ [Accessed 6 Jun. 2019].
  • Quijano-Sánchez, L., Liberatore, F., Camacho-Collados, J. and Camacho-Collados, M. (2018). Applying automatic text-based detection of deceptive language to police reports: Extracting behavioral patterns from a multi-step classification model to understand how we lie to the police. Knowledge-Based Systems, 149, pp.155-168.
2021-08-03
RISCANVI | Predicting the risk of criminal recidivism
Catalan Justice Department
Catalonia (Spain)
Spain
N/A
justice and democratic processes
predicting human behaviour
public administration
socioeconomic discrimination

N/A

2009-XX-XX
N/A

N/A

not known

In 2009 Catalan prisons began using an algorithmic tool to evaluate the risk of violent recidivism (reoffending with a violent crime after release) for all prisoners in the region. This mechanism, called e-Riscanvi (e-Riskchange), was established by the Catalan Consejería de Justicia (Ministry of Justice) to assess the risk of violent reoffense for every inmate, even those with no previous violent background. According to public documents, the ultimate aim of the system is to improve the treatment of inmates (Europa Press 2009). The system classifies behavior in four categories: self-directed (suicide or self-harm), intra-institutional (against inmates or prison staff), violation of sentence (escape or failure to return from leave) and violent crimes (Generalitat de Catalunya 2016). After categorizing behavioral inputs and analyzing all 43 variables within a case, the model determines whether the prisoner has a high, medium, or low risk of violent reoffense after release.
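
As a rough illustration of how an actuarial tool of this kind works, the sketch below scores a handful of case variables and maps the total onto risk bands. The item names, weights and cut-offs are invented for the example; they are not the real RisCanvi protocol or its 43 items.

    # Hypothetical actuarial risk sketch; items, weights and cut-offs are invented
    # for illustration and are NOT the real RisCanvi protocol.
    ITEM_WEIGHTS = {
        "previous_violent_offense": 3,
        "first_offense_before_age_21": 2,
        "substance_abuse": 2,
        "disciplinary_incidents_in_prison": 2,
        "lack_of_family_support": 1,
        # ... the real protocol evaluates 43 variables per case
    }

    def risk_band(case: dict) -> str:
        score = sum(w for item, w in ITEM_WEIGHTS.items() if case.get(item, False))
        if score >= 7:
            return "high"
        if score >= 4:
            return "medium"
        return "low"

    example_case = {"previous_violent_offense": True, "substance_abuse": True,
                    "lack_of_family_support": True}
    print(risk_band(example_case))   # -> "medium"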


  • Europa Press (2009). Las cárceles catalanas evaluarán mejor el riesgo de reincidencia violenta de todos los presos, Europa Press. Available at:https://www.europapress.es/nacional/noticia-carceles-catalanas-evaluaran-mejor-riesgo-reincidencia-violenta-todos-presos-20090706150745.html [Accessed Aug 1 2018].
  • Generalitat de Catalunya (2016). Execució penal. Barcelona: Adams.
2021-08-03
SISBEN | Identifying welfare recipients
Colombian National Council on Social and Economic Policy
Colombia
N/A
social services
profiling and ranking people
public administration
socioeconomic discrimination

Yes. Scholars have found statistical evidence that the system produces both false positives (households that should not qualify but receive subsidies) and false negatives (households that are in need but are excluded), and that it discriminates against persons displaced by violence, people with low education levels, and the elderly. Other authors also argue the system disproportionately denies minorities crucial access to subsidized health programs (Candelo, Gaviria, Polanía and Sethi, 2010).

1994-XX-XX
N/A

The Constitutional Court in Colombia has heard various cases “where individuals who have been classified erroneously argue that their rights and the principle of equality have been violated in their classification into the SISBEN indexing system” (Candelo, Gaviria, Polanía and Sethi, 2010).

not known

SISBEN is a composite welfare index used by the Colombian government to target groups for social programs. The program uses survey and financial data to categorize households across levels of need in order to allocate welfare services and government subsidies efficiently and fairly. While SISBEN searches for society’s most vulnerable in an attempt to achieve redistributive goals, there have been reports of discrimination and exclusion as a result of the program (Candelo, Gaviria, Polanía and Sethi, 2010). The Constitutional Court in Colombia has heard various cases “where individuals who have been classified erroneously argue that their rights and the principle of equality have been violated in their classification into the SISBEN indexing system” (ibid). Scholars have found statistical evidence that the system produces both false positives (households that should not qualify but receive subsidies) and false negatives (households that are in need but are excluded), and that it discriminates against persons displaced by violence, people with low education levels, and the elderly (ibid).
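
The sketch below illustrates, with invented numbers, how a composite index with a hard eligibility cut-off produces the kind of false negatives described above: an error in one surveyed variable is enough to push a needy household over the threshold. The variables, weights and cut-off are assumptions, not the real SISBEN formula.

    # Toy composite welfare index; variables, weights and cut-off are invented
    # and do not correspond to the real SISBEN formula.
    WEIGHTS = {"income_per_capita": 0.5, "housing_quality": 0.3, "education_years": 0.2}
    CUTOFF = 40.0   # households scoring below the cut-off qualify for subsidies

    def index_score(household: dict) -> float:
        # each variable is assumed to be pre-scaled to 0-100 (higher = better off)
        return sum(WEIGHTS[v] * household[v] for v in WEIGHTS)

    surveyed = {"income_per_capita": 30, "housing_quality": 70, "education_years": 40}
    actual   = {"income_per_capita": 30, "housing_quality": 35, "education_years": 40}

    print(index_score(surveyed) < CUTOFF)  # False: the household is excluded as surveyed
    print(index_score(actual) < CUTOFF)    # True: given its actual conditions it should
                                           # have qualified (a false negative)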

  • Cárdenas, Candelo, Gaviria, Polanía and Sethi (2016): “Discrimination in the Provision of Social Services to the Poor: A Field Experimental Study” In Discrimination in Latin America: An Economic Perspective. The Inter-American Development Bank and World Bank https://openknowledge.worldbank.org/bitstream/handle/10986/2694/52098.pdf;sequence=1
2021-08-03
ESECURITY | Predictive policing
Trento Police Department
Trento (Italy)
Italy
N/A
policing and security
profiling and ranking people
predicting human behaviour
public administration
threat to privacy
racial discrimination
socioeconomic discrimination

N/A

2015-XX-XX
N/A

N/A

not known

The Italian city of Trento has launched an ‘eSecurity’ initiative with the support of various backers and partners, including the European Commission, the Faculty of Law at the University of Trento, the ICT Centre of the Fondazione Bruno Kessler and the Trento Police Department (AlgorithmWatch, 2019). The project describes itself as “the first experimental laboratory of predictive urban security”, and subscribes to the criminological theory of ‘hot spots’, which holds that “in any urban environment, crime and deviance concentrate in some areas (streets, squares, etc.) and that past victimization predicts future victimization” (ibid). The system is largely modeled after predictive policing platforms in the US and UK, and employs algorithms that learn to identify these criminal ‘hot spots’ by analyzing historical data sets.

While Trento’s eSecurity initiative presents itself as an objective way to predict and tackle crime, allocating police resources efficiently and promoting public safety, it risks reproducing the biases embedded in the historical data from which it learns.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.91.
  • O’Neil, C. (2018). Weapons of math destruction. [London]: Penguin Books.
2021-07-30
PREDPOL | Predictive policing
Uruguay's Ministry of Interior
Oakland Police Department
Kent's Police Force
Uruguay
Oakland (US)
Kent (UK)
United Kingdom (UK)
United States (US)
Geolitica (formerly PredPol)
policing and security
predicting human behaviour
public administration
socioeconomic discrimination
racial discrimination

Yes. In Kent (UK), the system went through an assessment after the first year in which it was deployed (PredPol operational review, 2014). However, the assessment didn't include any references to its social impact.

Later, PredPol was also studied by the Human Rights Data Analysis Group (HRDAG), which found that the algorithm was biased against neighborhoods inhabited primarily by low-income people and minorities (Lum 2016). They attribute this bias to the fact that most drug crimes were previously registered in these neighborhoods, so the algorithm directed police officers to already over-policed communities. Because of this, HRDAG argues that the algorithm failed: it did not unlock data-driven insights into drug use previously unknown to police; rather, it reinforced established inequalities surrounding drug policing in low-income and minority neighborhoods (ibid).

2013-XX-XX
N/A

N/A

not known

Near the end of 2013, Uruguay’s Ministry of the Interior acquired a license for a popular predictive policing software called PredPol. PredPol is a proprietary algorithm utilized by police forces around the world, including the Oakland Police Department in California and Kent’s Police Force in England. Trained on historical crime datasets, the software relies on a machine learning algorithm to analyze three variables (crime type, location, and date/time) in order to predict crime ‘hot spots’ (Ortiz Freuler and Iglesias 2018). From its analysis, PredPol creates custom maps that direct police attention to 150 square meter ‘hot spots’ where crime is statistically likely to occur.
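
PredPol’s model itself is proprietary, so the sketch below is only a simplified stand-in for the general idea the entry describes: historical incidents (type, location, date/time) are binned into fixed-size grid cells, and the cells with the most activity are flagged as hot spots. The cell size, coordinates and incidents are illustrative assumptions.

    # Simplified stand-in for grid-based hot-spot ranking; not PredPol's actual
    # (proprietary) model. Cell size, coordinates and incidents are illustrative.
    from collections import Counter

    CELL_SIZE = 150.0   # meters per grid cell, for illustration

    def cell_of(x_m: float, y_m: float) -> tuple:
        # map a location (in meters on a local grid) to a grid cell
        return (int(x_m // CELL_SIZE), int(y_m // CELL_SIZE))

    # historical incidents: (crime_type, x meters, y meters, date/time)
    incidents = [
        ("burglary", 120, 90,  "2013-11-02T22:10"),
        ("burglary", 140, 60,  "2013-11-05T23:40"),
        ("theft",    700, 510, "2013-11-06T14:05"),
        ("burglary", 130, 70,  "2013-11-07T21:30"),
    ]

    counts = Counter(cell_of(x, y) for _, x, y, _ in incidents)
    print(counts.most_common(2))   # cells with the most incidents, e.g. [((0, 0), 3), ((4, 3), 1)]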

PredPol—and the field of predictive policing in general—has been criticized for its opacity and capacity for discrimination.

  • PredPol's site: https://www.predpol.com
  • Geolitica's site: https://geolitica.com
2021-07-30
DataWorks Plus facial recognition software
Michigan State Police (US)
Michigan (US)
United States (US)
DataWorks Plus
policing and security
identifying images of faces
profiling and ranking people
public administration
threat to privacy
socioeconomic discrimination
racial discrimination

N/A

N/A
N/A

N/A

not known

Amidst Black Lives Matter protests and calls for racial justice reform in the United States, the story of Robert Williams, a Black man wrongfully arrested because of a faulty facial recognition match, underscores how current technologies run the risk of exacerbating preexisting inequalities.

After facial recognition software used by the Michigan State Police matched Robert Williams’ face to a still image from a surveillance video of a man stealing USD 3,800 worth of watches, officers from the Detroit Police Department arrived at Mr. Williams’ house to arrest him (Hill 2020). While Mr. Williams sat in the interrogation room knowing that he was innocent of the crime, he had no idea that his case would be the first known account in the United States of a wrongful arrest based on a facial recognition algorithm (ibid).

  • Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. By the National Institute of Standards and Technology (PDF file): https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
2021-07-30
YouTube's content recommendation algorithm
YouTube (owned by Alphabet)
worldwide
YouTube (owned by Alphabet)
communication and media
predicting human behaviour
evaluating human behaviour
profiling and ranking people
making personalised recommendations
private sector
threat to privacy
social polarisation / radicalisation
disseminating misinformation

N/A

N/A
N/A

N/A

not known

Content recommendation algorithms, acting as information promoters and gatekeepers online, play an increasingly important role in shaping today’s society, politics, and culture. Given that YouTube is estimated to have the second-most web traffic after Google and that 70% of the videos YouTube users watch are suggested by its algorithm, it is safe to say that YouTube’s recommendation system commands much of the world’s attention (Hao 2019).

From this position of power, the algorithm has amassed serious criticism, with critics asserting that the recommendation system leads users down rabbit holes of content and systematically exposes them to increasingly extreme material (Roose 2019). Since YouTube’s algorithm is built to engage users and keep them on the platform, it often suggests content that users have already expressed interest in. The unintended but problematic repercussion of this feedback loop is “that users consistently migrate from milder to more extreme content” on the platform.
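
The feedback loop critics describe can be caricatured in a few lines: if a recommender always proposes whatever is slightly more engaging (here, slightly more “extreme”) than a user’s current watch history, the history drifts steadily away from where it started. The catalog, the engagement assumption and the drift step are all invented for this toy example; none of it reflects YouTube’s actual system.

    # Toy caricature of an engagement feedback loop; NOT YouTube's recommender.
    catalog = [i / 100 for i in range(101)]   # each video has an "extremeness" in [0, 1]

    def recommend(history):
        # assumed engagement model: the best-performing suggestion is slightly
        # more extreme than the average of what the user already watches
        target = sum(history) / len(history) + 0.05
        return min(catalog, key=lambda v: abs(v - target))

    history = [0.10]              # the user starts on mild content
    for _ in range(15):           # each step, the user watches the recommendation
        history.append(recommend(history))

    print(round(history[-1], 2))  # ends up well above the 0.10 starting point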

  • Roose, K (2019): “The Making of a YouTube Radical” In New York Times https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html
  • Lewis, R (2018): “Alternative Influence: Broadcasting the Reactionary Right on Youtube” Data & Society https://datasociety.net/wp-content/uploads/2018/09/DS_Alternative_Influence.pdf
2021-07-30
Algorithm to allocate social benefits
Trelleborg municipality (Sweden)
Trelleborg (Sweden)
Sweden
N/A
social services
automating tasks
profiling and ranking people
public administration
threat to privacy
socioeconomic discrimination

N/A

N/A
N/A

N/A

not known

The Swedish city of Trelleborg has turned to an automated decision-making system to allocate social benefits. The system checks and cross-checks benefit applications across various municipal databases (for example, the tax agency and the unit for housing support). Because the system issues decisions automatically, the city has been able to considerably downsize its social benefits department, reducing its number of caseworkers from 11 to just 3 (AlgorithmWatch 2019). The municipality has also reported that the number of people receiving social benefits has substantially decreased since implementing the automated system. While the initiative has received various innovation prizes, applicants and citizens have been left uninformed about the automation process.
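
The sketch below shows, in a very reduced form, what automated cross-checking of a benefits application can look like: declared data is compared against other registries and the case is approved, rejected or referred to a human. The registries, fields, thresholds and rules are assumptions made for the example, not Trelleborg’s actual decision logic.

    # Minimal sketch of cross-checking a benefits application against other
    # registries; data sources, fields and rules are illustrative assumptions.
    def decide(application: dict, tax_registry: dict, housing_registry: dict) -> str:
        pid = application["person_id"]
        declared = application["declared_monthly_income"]
        registered = tax_registry.get(pid, 0)
        if abs(declared - registered) > 500:
            return "refer to caseworker"   # inconsistent data: human review
        if registered > 12000 or housing_registry.get(pid) == "receives support":
            return "rejected"
        return "approved"

    application = {"person_id": "19890101-1234", "declared_monthly_income": 8000}
    tax_registry = {"19890101-1234": 8100}
    print(decide(application, tax_registry, housing_registry={}))   # -> "approved"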


  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.130.
2021-07-30
SARI | Automatic image recognition
Italy's National Police Force
Italy
N/A
policing and security
identifying images of faces
public administration
state surveillance

N/A

N/A
N/A

N/A

not known

The “S.A.R.I.” (Sistema Automatico di Riconoscimento Immagini), or Automated Image Recognition System, is an algorithmic facial recognition tool used by Italy’s national police force. The software has the capacity to process live footage and identify recorded subjects through a process of facial matching (AlgorithmWatch 2019). In 2018, SARI made headlines when it correctly identified two burglars in Brescia as a result of its algorithmic matching process. Despite that success, questions have been raised about the accuracy of the software (the risks it poses to justice and safety due to its susceptibility to false positives and false negatives), as well as the cybersecurity and privacy guidelines it follows.
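
Facial identification systems of this kind typically compare a probe image’s embedding against a gallery of enrolled faces and accept the best match only above a similarity threshold; where that threshold sits is what trades false positives against false negatives. The sketch below illustrates that mechanic with made-up embeddings and thresholds; it is not SARI’s implementation.

    # Threshold-based face matching on embedding vectors; embeddings, gallery and
    # threshold are invented for illustration, not SARI's actual implementation.
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = lambda v: math.sqrt(sum(x * x for x in v))
        return dot / (norm(a) * norm(b))

    gallery = {                        # enrolled subjects -> face embedding
        "subject_A": [0.9, 0.1, 0.3],
        "subject_B": [0.1, 0.8, 0.5],
    }

    def identify(probe, threshold=0.95):
        best_id, best_score = max(((sid, cosine(probe, emb)) for sid, emb in gallery.items()),
                                  key=lambda t: t[1])
        # a lower threshold produces more matches but more false positives;
        # a higher one produces fewer false positives but more false negatives
        return best_id if best_score >= threshold else "no match"

    print(identify([0.85, 0.15, 0.35]))   # -> "subject_A"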

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.90.
2021-07-30
Algorithm to match children with schools
Poland's Ministry of Education
Poland
Asseco Data Systems
education and training
automating tasks
profiling and ranking people
public administration
threat to privacy
socioeconomic discrimination

N/A

N/A
N/A

N/A

not known

The Polish government has implemented decision-making algorithms that analyze a variety of factors in order to match children with schools. The algorithms consider factors like the “number of children, single-parent households, food allergies of children, handicaps, and material situation” (AlgorithmWatch 2019). One such system, Platforma Zarządzania Oświatą (Education Management Platform), built by Asseco Data Systems, is utilized in 20 Polish cities and has assigned students to more than 4,500 schools and preschools (ibid). The system centralizes a wide range of functions, managing “recruitment at various school levels (including pre-school recruitment), electronic diaries, student information management, attendance analysis, equipment inventory, issuing certificates and IDs, calculation and payment of student scholarships, school-parents communication, and school-organs, superior management of the organization of educational institutions (organization sheet, lesson plans, recruitment)” (ibid).

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.108.
2021-07-30
Algorithm to award grades to students
Ofqual (England's exam regulator)
England (UK)
United Kingdom (UK)
N/A
education and training
evaluating human behaviour
public administration
socioeconomic discrimination

N/A

2020-XX-XX
N/A

N/A

not known

COVID-19 has forced school administrators around the globe to make dramatic alterations. In England, Education Secretary Gavin Williamson cancelled the summer exam series and announced that an algorithm would award grades to students who would have been taking GCSE and A-level exams (Lightfoot 2020). Ofqual, England’s exam regulator, explained that its algorithm would assign grades using a standardized model that examined a combination of variables, including a student’s “teacher assessment, class ranking and the past performance of their schools” (Adams 2020).
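
As a simplified caricature of that kind of standardization, the sketch below places a school’s students in teacher-provided rank order and awards grades by reproducing the school’s historical grade distribution. The distribution, cohort and rounding rule are invented; this is not Ofqual’s published model.

    # Caricature of rank-based standardization; distribution and cohort are
    # invented and this is not Ofqual's published model.
    def award_grades(ranked_students, historical_distribution):
        # historical_distribution: share of past students at each grade, best grade first
        n = len(ranked_students)
        grades, awarded = [], 0
        for grade, share in historical_distribution:
            quota = round(share * n)
            grades += [grade] * quota
            awarded += quota
        grades += [historical_distribution[-1][0]] * max(0, n - awarded)  # pad rounding gap
        return dict(zip(ranked_students, grades[:n]))

    ranked = [f"student_{i}" for i in range(1, 11)]             # teacher-provided rank order
    history = [("A", 0.2), ("B", 0.3), ("C", 0.4), ("D", 0.1)]  # school's past results
    print(award_grades(ranked, history))

Under the toy logic above, a strong student at a school with historically weak results can only receive grades that the school has produced in the past.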

This decision has not been without controversy. Not only have experts warned that a system dependent on teacher evaluations could hurt already disadvantaged students, but parents and students alike have started a grassroots movement in opposition to the algorithmic system.

  • Lightfoot, L (2020): “‘Against Natural Justice’: father to sue exams regulator over A-level grades system”, https://www.theguardian.com/education/2020/jun/20/against-natural-justice-father-to-sue-exams-regulator-over-a-level-grades-system
  • Weale, S and Batty, D (2020): “Fears that cancelling exams will hit BAME and poor pupils worst”, https://www.theguardian.com/world/2020/mar/19/fears-that-cancelling-exams-will-hit-black-and-poor-pupils-worst
2021-07-30
Algorithm to predict risk of dropping out of school
Salta provincial government (Argentina)
Salta (Argentina)
Argentina
Microsoft
education and training
profiling and ranking people
evaluating human behaviour
predicting human behaviour
public administration
socioeconomic discrimination

N/A

N/A
N/A

N/A

not known

Local and federal governments in Argentina continue to invest in a future of algorithmic and data-driven governance. On the national level, the federal government has developed a database profiling citizens based on socioeconomic data in order to allocate social benefits more efficiently, as well as a database that stores civilian biometric data to improve public safety and criminal investigations.

In June 2017, the local government of the province of Salta partnered with Microsoft to create and deploy two different predictive tools optimized for identifying teenage pregnancies and school dropouts (Ortiz Freuler and Iglesias 2018). Trained from private datasets made available by the province’s Ministry of Early Childhood, the two systems identify those with the highest risk of teenage pregnancy and dropping out of school, alerting governmental agencies upon determining high-risk subjects. 

  • Kwet, M. (2019). Digital colonialism is threatening the Global South, Al Jazeera, https://www.aljazeera.com/indepth/opinion/digital-colonialism-threatening-global-south-190129140828809.html
  • Ortiz Freuler, J. and Iglesias, C. (2018). Algorithms and Artificial Intelligence in Latin America: A Study of Implementation by Governments in Argentina and Uruguay, World Wide Web Foundation.
2021-07-30
Algorithm to determine services to the unemployed
Spanish Public Employment Service
Spain
N/A
labour and employment
social services
profiling and ranking people
predicting human behaviour
public administration
socioeconomic discrimination

N/A

2012-XX-XX
N/A

N/A

not known

The Spanish Public Employment Service (SEPE) uses an automated platform to determine unemployment benefits and distribute employment opportunities and job training to the unemployed (AlgorithmWatch 2019). Since implementation, the number of people receiving unemployment benefits has fallen by more than 50% (ibid). While it is unclear whether the automated system is entirely to blame for the denial of benefits, scholars argue that deficient data reconciliation (system malfunction stemming from incorrect data and input errors) is an inevitable and commonplace problem in algorithmic systems and is the likely culprit within the SEPE.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.121.
2021-07-30
Algorithm to allocate housing to the homeless
Los Angeles municipality (US)
Los Angeles (US)
United States (US)
N/A
social services
predicting human behaviour
profiling and ranking people
public administration
socioeconomic discrimination
threat to privacy

N/A

2013-XX-XX
N/A

N/A

not known

The City of Los Angeles developed a Coordinated Entry System (CES) to allocate housing to homeless populations in 2013. CES is based on the effective housing-first approach, which aims first to get a roof over the heads of homeless persons and then to provide assistance in other ways. First, the homeless are asked to fill out a survey, which gathers their information and organizes it into a database. Then, an algorithm ranks the cases on a “vulnerability index” so that those who need housing the most are helped first (Misra, 2018).
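
A reduced sketch of that ranking step: survey answers are scored into a single vulnerability number, applicants are sorted by it, and the scarce units go to the highest scores. The questions, weights and data below are invented; they are not the actual CES survey or scoring.

    # Sketch of prioritization by a vulnerability score; questions, weights and
    # data are invented, not the actual CES survey or scoring.
    SURVEY_WEIGHTS = {
        "years_homeless": 2,
        "chronic_health_condition": 3,
        "recent_emergency_room_visits": 1,
    }

    def vulnerability_score(answers: dict) -> int:
        return sum(SURVEY_WEIGHTS[q] * answers.get(q, 0) for q in SURVEY_WEIGHTS)

    applicants = {
        "person_1": {"years_homeless": 4, "chronic_health_condition": 1},
        "person_2": {"years_homeless": 1, "recent_emergency_room_visits": 3},
        "person_3": {"years_homeless": 6, "chronic_health_condition": 1,
                     "recent_emergency_room_visits": 2},
    }

    available_units = 2
    ranked = sorted(applicants, key=lambda p: vulnerability_score(applicants[p]), reverse=True)
    print(ranked[:available_units])   # the two highest-scoring applicants are housed first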

While there is an argument to be made for CES’ prioritization principle (there are 58,000 unhoused people in Los Angeles County alone, and there are not currently enough housing resources for everyone), the compulsory survey has been found to ask private and even intentionally criminalizing questions about sensitive behavior. For example, it asks “if you are having sex without protection; if you’re trading sex for money or drugs; if you’re thinking of harming yourself”.

  • Misra, T (2018): “The Rise of Digital Poorhouses” In CityLab https://www.citylab.com/equity/2018/02/the-rise-of-digital-poorhouses/552161/ 
  • For more information: Chiel, E (2018): “The Injustice of Algorithms” In The New Republic https://newrepublic.com/article/146710/injustice-algorithms
2021-07-30
Algorithm to screen medical school applicants
St. George's Hospital Medical School (London, UK)
London (UK)
United Kingdom (UK)
N/A
education and training
profiling and ranking people
private sector
socioeconomic discrimination
gender discrimination
racial discrimination

N/A

1970-XX-XX
1988-XX-XX

The U.K.’s Commission for Racial Equality found St. George’s Medical School guilty of practising racial and sexual discrimination in its admissions process in 1988.

no longer in use

One of the first cases of algorithmic bias took place in the 1970s at St. George’s Hospital Medical School in the United Kingdom. Hoping to make the application process more efficient and less burdensome on the administration, St. George’s deployed a computer program to do initial screenings of applicants. The program was trained on a sample data set of past screenings, analysing which applicants had been historically accepted to the medical school. Having learned from this data set, the program went on to deny interviews to as many as 60 applicants because they were female or had names that did not sound European (Garcia 2017). In 1988, the United Kingdom’s Commission for Racial Equality charged St. George’s Medical School with practising racial and sexual discrimination in its admissions process (ibid). While St. George’s had no intention of committing racial and sexual discrimination, its new computer program had learned from a structurally biased admissions process.
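
The mechanism is easy to reproduce in miniature: if a screening rule is fitted to historical decisions that already disadvantaged a group, the rule inherits that disadvantage even when new applicants’ qualifications are identical. The records and the scoring rule below are invented purely to illustrate this; they are not the actual St. George’s program.

    # Invented illustration of bias learned from historical screening decisions;
    # not the actual St. George's program.
    # historical records: (academic_score, non_european_sounding_name, was_interviewed)
    history = [
        (82, False, True), (78, False, True), (88, False, True),
        (65, False, False), (85, True, False), (80, True, False),
    ]

    def historical_interview_rate(non_european: bool) -> float:
        group = [interviewed for _, flag, interviewed in history if flag == non_european]
        return sum(group) / len(group)

    def screen(academic_score: int, non_european: bool) -> bool:
        # "learned" rule: academics are weighted by the group's past interview rate
        return academic_score * historical_interview_rate(non_european) >= 55

    print(screen(85, False))  # True: offered an interview
    print(screen(85, True))   # False: identical academics, rejected because past
                              # decisions never interviewed this group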

  • Garcia, M (2017): “Racist in the Machine: The Disturbing Implications of Algorithmic Bias” In World Policy Journal https://muse.jhu.edu/article/645268/pdf
2021-07-30
REKOGNITION | Facial recognition
Amazon
Orlando Police Department (US)
worldwide
Orlando (US)
United States (US)
Amazon
policing and security
identifying images of faces
public administration
private sector
threat to privacy
racial discrimination
socioeconomic discrimination
state surveillance

Yes. A study published by the MIT Media Lab in early 2019 found that Rekognition, Amazon’s facial recognition system, performed substantially worse when identifying an individual’s gender if they were female or darker-skinned. Rekognition committed zero errors when identifying the gender of lighter-skinned men, but it confused women for men 19% of the time and mistook darker-skinned women for men 31% of the time (Raji and Buolamwini 2019). Similarly, a previous test conducted by the ACLU found that while scanning pictures of members of Congress, Rekognition falsely matched 28 individuals with police mugshots (Cagle and Ozer 2018).

2017-XX-XX
N/A

N/A

not known

Facial recognition algorithms have repeatedly proven inaccurate when classifying people who are not white men. A study published by the MIT Media Lab in early 2019 found that Rekognition, Amazon’s facial recognition system, performed substantially worse when identifying an individual’s gender if they were female or darker-skinned. Rekognition committed zero errors when identifying the gender of lighter-skinned men, but it confused women for men 19% of the time and mistook darker-skinned women for men 31% of the time (Raji and Buolamwini 2019). Similarly, a previous test conducted by the ACLU found that while scanning pictures of members of Congress, Rekognition falsely matched 28 individuals with police mugshots (Cagle and Ozer 2018).
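
Audits of this kind boil down to computing error rates per demographic subgroup instead of a single overall figure. The records below are invented and only mimic the pattern the cited study reports; they are not its data.

    # Disaggregated error-rate audit on invented records that only mimic the
    # pattern reported by Raji and Buolamwini (2019); not their data.
    records = (
        [("lighter_male", "male", "male")] * 20
        + [("lighter_female", "female", "female")] * 16
        + [("lighter_female", "male", "female")] * 4
        + [("darker_female", "female", "female")] * 14
        + [("darker_female", "male", "female")] * 6
    )   # each record: (subgroup, predicted_gender, true_gender)

    def error_rate(subgroup: str) -> float:
        rows = [(p, t) for g, p, t in records if g == subgroup]
        return sum(p != t for p, t in rows) / len(rows)

    for group in ("lighter_male", "lighter_female", "darker_female"):
        print(group, f"{error_rate(group):.0%}")
    # a single overall accuracy number would hide the gap these per-group rates expose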

Shortly after Buolamwini’s report was published, Microsoft, IBM and the Chinese firm Megvii vowed to improve their facial recognition software, whereas Amazon, by comparison, denied that the research suggested anything about the performance of its own technology.

  • Vincent, J (2019): “Gender and racial bias found in Amazon’s facial recognition technology (again)”. In The Verge https://www.theverge.com/2019/1/25/18197137/amazon-rekognition-facial-recognition-bias-race-gender 
  • Raji, I.D; Buolamwini, J (2019): “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products” In Association for the Advancement of Artificial Intelligence. http://www.aies-conference.com/wp-content/uploads/2019/01/AIES-19_paper_223.pdf 
2021-07-30
Algorithm to match judges with cases
Polish Ministry of Justice
Poland
N/A
justice and democratic processes
automating tasks
public administration
socioeconomic discrimination

N/A

2017-XX-XX
N/A

N/A

not known

In 2018, the Polish Ministry of Justice implemented the “System of Random Allocation of Cases” (or the System Losowego Przydziału Spraw), an algorithmic system that randomly matches judges with cases across the judicial system (Algorithm Watch 2019).

The initiative has faced intense public scrutiny since first being piloted in three Polish cities in 2017. While no quantitative investigations have been conducted on the topic, qualitative evidence suggests that the system’s process of matching cases is not random (ibid). Watchdog groups and activists contend that the system’s randomness is impossible to verify given the opacity of the algorithm. While the Ministry has released documents explaining how the algorithm operates, it refuses to release the source code to the public, rejecting a 2017 freedom of information request by the NGOs ePaństwo and Watchdog Polska to divulge the details of the system (ibid).

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.100.
2021-07-30
Utopia Analytics online content moderation algorithm
tutti.ch (Swiss peer-to-peer sales platform)
Suomi24 (Finnish public forum)
Switzerland
Finland
Utopia Analytics
communication and media
business and commerce
evaluating human behaviour
automating tasks
private sector
disseminating misinformation
social polarisation / radicalisation

N/A

2016-XX-XX
N/A

N/A

not known

The Finnish startup Utopia Analytics trains machine intelligence to do content moderation on social media platforms, discussion forums, and e-commerce sites (Algorithm Watch, 2019). The firm hopes to “bring democracy back into social media” by encouraging safe community dialogue (ibid). Utopia Analytics sells an AI (specifically, a neural network) that is first trained by humans from a body of sample content and then is left to moderate on its own. Human moderators retain the ability to make decisions in complicated and controversial scenarios. 

Despite the service’s implementation on various sites like the Swiss peer-to-peer sales platform tutti.ch and the Finnish public forum Suomi24, Mark Zuckerberg has denounced Utopia Analytics’ technology as primitive and inaccurate. He claims that automated content moderation is at least 5-10 years away due to the significant progress still needed before machine intelligence can comprehend the subtleties and situational context inherent to cracking down on harmful content.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.63.
2021-07-30
Palantir policing algorithm
Danish Police
Danish Intelligence Services
Denmark
Palantir
policing and security
automating tasks
predicting human behaviour
public administration
state surveillance
threat to privacy

N/A

N/A
N/A

N/A

not known

In 2016, documents surfaced showing that the Danish police and the Danish intelligence service had bought policing software from the US firm Palantir (AlgorithmWatch 2019). The system sources information from diverse data silos, centralizing “document and case handling systems, investigation support systems, forensic and mobile forensic systems, as well as different types of acquiring systems such as open source acquisition, and information exchange between external police bodies” (ibid). While originally adopted as an anti-terrorism measure, experts theorize that the software will undoubtedly serve as the foundation for predictive policing initiatives (ibid). Two years prior, the Danish police had implemented an automatic license plate control system, mounted on police cars, that scanned license plates in order to identify persons of interest in real time. Human rights advocates have called for more public oversight and general transparency in regard to these surveillance systems.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.52.
2021-07-30
IBORDERCONTROL | Automatic lie detector
EU border control agencies
European Union (EU)
European Dynamics
policing and security
profiling and ranking people
evaluating human behaviour
predicting human behaviour
public administration
threat to privacy
racial discrimination
state surveillance

N/A

2019-XX-XX
N/A

N/A

not known

iBorderCtrl, software developed by the firm European Dynamics, aims to help border security screen non-EU nationals at EU borders (AlgorithmWatch 2019). Using “cheating biomarkers”, the system tries to determine whether a person is lying during an “automated interview” with a virtual border guard that tracks facial and body movements. If iBorderCtrl suspects the person of lying, the system asks for biometric information and prompts a personal interview. iBorderCtrl’s purpose is “to reduce the subjective control and workload of human agents and to increase the objective control with automated means that are noninvasive and do not add to the time the traveler has to spend at the border” (ibid). That being said, the system still relies on a “human-in-the-loop principle” whereby real border security agents monitor the process.
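
Stripped of the contested biomarker analysis itself, the triage logic described above reduces to a score and a threshold: below the threshold the traveller is waved through, above it the case escalates to biometric checks and a human border guard. In the sketch below the scoring function is a stub and the threshold is an assumption; it is not iBorderCtrl’s actual design.

    # Triage sketch: automated score plus human-in-the-loop escalation.
    # The scoring function is a stub and the threshold is an assumption.
    def deception_score(interview_recording: bytes) -> float:
        # placeholder for the proprietary "cheating biomarker" analysis
        return 0.62

    def triage(interview_recording: bytes, threshold: float = 0.5) -> str:
        if deception_score(interview_recording) < threshold:
            return "cleared by automated interview"
        # the system only flags; a human border guard takes the decision
        return "collect biometrics and refer to a border agent"

    print(triage(b"recorded-interview"))   # -> "collect biometrics and refer to a border agent"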

Scholars argue that iBorderCtrl processes sensitive information while falling into a pseudo-scientific trap: detecting a lie requires complex psychological assessment that cannot reliably be inferred from facial and body movements alone.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.43.
  • ActuIA (2019): “iBorderCtrl : a dangerous misunderstanding of what AI really is” In ActuIA https://www.actuia.com/english/iborderctrl-a-dangerous-misunderstanding-of-what-ai-really-is/
2021-07-30
Algorithm to profile public employment applicants
Flemish public employment service (VDAB) (Belgium)
Flanders (Belgium)
Belgium
N/A
labour and employment
profiling and ranking people
evaluating human behaviour
public administration
socioeconomic discrimination

N/A

2017-XX-XX
N/A

N/A

not known

Amidst government plans to invest €30 million in AI infrastructure, the Flemish government has created an algorithm to streamline the processing of job seekers who use the public employment service (VDAB) (AlgorithmWatch 2019). The system creates profiles of job applicants based on their online behavioral data, predicting their interests and job suitability. Job seekers’ motivation to secure employment is tracked by the algorithm based on information about their activity, communication response rate, and frequency of use of the VDAB platform (ibid).

The system has faced criticism regarding its implications for the privacy of job seekers as well as for exacerbating pre-existing inequalities. Caroline Copers, the general secretary of the Flemish wing of ABVV, argues that the job seekers who require the most assistance from the public employment service of Flanders will be disadvantaged by the algorithmic system (ibid), because this demographic has lower rates of digital literacy and computer usage and their online behavior may therefore be misread as a lack of motivation.

  • AW AlgorithmWatch gGmbH (2019): “Automating Society: Taking Stock of Automated Decision-Making in the EU”. Open Society Foundation, p.43.
  • Lesaffer, P (2017): “Unemployed? Through this new way, the VDAB wants to check whether you are actively looking for a new job”. In the Guardian 
2021-07-30