Fall of an Algorithm Case Database
Name
1. Microsoft's Tay Bot
2. NEDA Tessa Bot
3. Meta's Galactica
4. Scatter Lab's Lee-Luda Chatbot
5. Oregon DHS's Safety at Screening Tool
6. Allegheny County Family Screening Tool (AFST)
7. Chile Local Childhood Office's Sistema Alerta Niñez
8. New Zealand's Vulnerable Children Predictive Risk Modeling
9. Teen Pregnancy Risk Prediction in Argentina
10. VioGén Gender Violence Recidivism Risk Assessment
11. Twitter's Image Cropping Algorithm
12. Zoom's Virtual Background
13. Amazon's AI Recruiting Tool
14. Arkansas' RUGS Home Care
15. Idaho, K.W. vs. Armstrong
16. Missouri's NF LOC Home Care
17. Takaful Cash Transfer Program in Jordan
18. Netherlands' SyRI: System Risk Indication
19. EPIC Sepsis Prediction Tool
20. HHS/UNOS Organ Allocation Algorithm
21. UnitedHealth and NaviHealth's nH Predict
22. Obermeyer et al's audit of a healthcare management algorithm
23. New Zealand's Equity Adjuster Waitlist
24. RealPage's YieldStar
25. UK GCE Grading Algorithm
26. SAS's EVAAS in Houston
27. San Diego County Facial Recognition Tool (TACIDS)
28. NYPD using Clearview AI
29. Detroit Project Green Light + DataWorks Plus
30. City of Somerville's Face Surveillance Full Ban Ordinance
31. Facial Recognition in NYC Atlantic Plaza Towers
32. Rite Aid's Enrollment Database for Surveillance Systems
33. Hyderabad's #BanTheScan Campaign
34. São Paulo's Metro Intelligent Security System
35. Chinese Police Surveillance in the Xinjiang Uyghur Autonomous Region
36. New Zealand's Identity Check Program
37. Uganda's Intelligent Transport Monitoring System
38. UCLA PredPol
39. City of Pittsburgh + metro21's predictive policing pilot
40. Germany's HessenDATA
Fields:
  • Abandoned (as of 12/2023)?
  • Developed in which region(s)?
  • Deployed in which region(s)?
  • Domain
  • Name of the organization overseeing the algorithm's deployment
  • Name of any organizations involved with developing the algorithm (if different from deploying organization)
  • Was the algorithm ever deployed?
  • Year it was deployed (blank if never deployed)
  • Year it was abandoned
  • Summary
  • Time elapsed from release to discovery (start of critique)
  • Time elapsed from discovery (start of critique) to abandonment
  • Reference list
1. Microsoft's Tay Bot
Abandoned (as of 12/2023)? Yes, model owner's choice
Developed in: USA
Deployed in: Global deployment
Domain: Chatbot
Overseeing organization: Microsoft
Developing organization: -
Ever deployed? Yes
Year deployed: 2016
Year abandoned: 2016
Summary: Microsoft's Tay chatbot began tweeting offensive content after interacting with users who wanted it to do so.
Time from release to discovery: <1 month
Time from discovery to abandonment: <1 month
References:
  • AIID page: https://incidentdatabase.ai/cite/6/#r1374
2. NEDA Tessa Bot
Abandoned (as of 12/2023)? Yes, model owner's choice
Developed in: USA
Deployed in: Global deployment
Domain: Chatbot
Overseeing organization: The National Eating Disorder Association
Developing organization: Cass, University of Washington researchers
Ever deployed? Yes
Year deployed: 2022
Year abandoned: 2023
Summary: Shortly after NEDA fired all of its hotline staff (following their decision to unionize), eating disorder activists posted on social media about how Tessa, NEDA's chatbot, gave harmful and triggering advice.
Time from release to discovery: 1+ year
Time from discovery to abandonment: <1 month
References:
  • https://www.npr.org/2023/05/24/1177847298/can-a-chatbot-help-people-with-eating-disorders-as-well-as-another-human
3. Meta's Galactica
Abandoned (as of 12/2023)? Other/complicated; Repaired
Developed in: USA
Deployed in: Global deployment
Domain: Chatbot
Overseeing organization: Meta
Developing organization: -
Ever deployed? Yes
Year deployed: 2022
Year abandoned: 2022
Summary: Within three days of releasing a public demo of its Galactica language model, Meta took the demo down and re-branded Galactica as "available for researchers" only.
Time from release to discovery: <1 month
Time from discovery to abandonment: <1 month
References:
  • https://galactica.org
4. Scatter Lab's Lee-Luda Chatbot
Abandoned (as of 12/2023)? Yes, model owner's choice
Developed in: South Korea
Deployed in: South Korea
Domain: Chatbot
Overseeing organization: Scatter Lab
Developing organization: -
Ever deployed? Yes
Year deployed: 2020
Year abandoned: 2021
Summary: Seoul-based startup Scatter Lab's AI chatbot "Lee-Luda" was removed from Facebook Messenger 20 days after launch, after users complained that it used offensive and discriminatory language and generated hate speech toward LGBTQ people and people with disabilities.
Time from release to discovery: <1 month
Time from discovery to abandonment: <1 month
References:
  • https://incidentdatabase.ai/cite/106/
5. Oregon DHS's Safety at Screening Tool
Abandoned (as of 12/2023)? Yes, model owner's choice
Developed in: USA
Deployed in: USA
Domain: Child welfare
Overseeing organization: Oregon Department of Human Services
Developing organization: -
Ever deployed? Yes
Year deployed: 2018
Year abandoned: 2022
Summary: Oregon DHS decided to stop using an AI-based screening tool (modeled after the AFST) in the wake of an April 2022 AP news article reporting that such tools have the ability to "heighten racial disparities in the child welfare system".
Time from release to discovery: Unknown to coders
References:
  • https://incidentdatabase.ai/cite/238/#r1762
6. Allegheny County Family Screening Tool (AFST)
Abandoned (as of 12/2023)? No; Repaired
Developed in: New Zealand; USA
Deployed in: USA
Domain: Child welfare
Overseeing organization: Allegheny County Department of Human Services
Developing organization: Team of researchers from Auckland University
Ever deployed? Yes
Year deployed: 2016
Summary: Many external researchers, journalists, and advocacy groups have raised concerns about the AFST, including that it is more likely to investigate (and ultimately deem negligent) (1) poor families, (2) Black families, and (3) people with disabilities.
Time from release to discovery: Unknown to coders
References:
  • https://www.alleghenycounty.us/Human-Services/News-Events/Accomplishments/Allegheny-Family-Screening-Tool.aspx
7. Chile Local Childhood Office's Sistema Alerta Niñez
Abandoned (as of 12/2023)? No
Developed in: Chile
Deployed in: Chile
Domain: Child welfare
Overseeing organization: Chile Ministry of Social Development and Family
Developing organization: Researchers at the Centre for Social Data Analytics (CSDA) at the Auckland University of Technology and the Public Innovation Laboratory (GobLab) at the Universidad Adolfo Ibáñez in Chile
Ever deployed? Yes
Year deployed: 2019
Summary: Chile's Ministry of Social Development and Family launched the program to strengthen childhood protection, using an algorithm that produced risk scores to determine necessary intervention, but it rejected the results of an independent external audit revealing that the system wrongly analyzed variables tied to direct measures of poverty rather than safety, perpetuating over-intervention in poor neighborhoods.
Time from release to discovery: Unknown to coders
References:
  • https://notmy.ai/news/case-study-a-childhood-alert-system-sistema-alerta-ninez-san-chile/
8. New Zealand's Vulnerable Children Predictive Risk Modeling
Abandoned (as of 12/2023)? Yes, through existing governing bodies
Developed in: New Zealand
Deployed in: New Zealand
Domain: Child welfare
Overseeing organization: New Zealand Government
Developing organization: New Zealand Ministry of Social Development with researchers from Auckland University
Ever deployed? No
Year abandoned: 2015
Summary: New Zealand's Social Development Minister intervened and halted deployment of the algorithm after major concerns emerged from additional research conducted during the request for ethics approval for further study.
Time from release to discovery: N/A (never released/deployed)
Time from discovery to abandonment: <1 month
References:
  • https://notmy.ai/news/case-study-a-childhood-alert-system-sistema-alerta-ninez-san-chile/
9. Teen Pregnancy Risk Prediction in Argentina
Abandoned (as of 12/2023)? Unknown
Developed in: USA; Argentina
Deployed in: Argentina; Brazil
Domain: Family and social services
Overseeing organization: Salta City Government
Developing organization: Microsoft, Conin Foundation
Ever deployed? Yes
Year deployed: 2018
Summary: The Applied Artificial Intelligence Laboratory (LIAA) at the University of Buenos Aires published a detailed technical analysis and review raising concerns about an algorithm, developed through a partnership between Microsoft and the province of Salta's Ministry of Early Childhood, for predicting teenage pregnancy.
Time from release to discovery: <1 year
References:
  • https://news.microsoft.com/es-xl/microsoft-gobierno-salta-firman-acuerdo-aplicar-la-inteligencia-artificial-la-prevencion-los-problemas-mas-urgentes/
10. VioGén Gender Violence Recidivism Risk Assessment
Abandoned (as of 12/2023)? No
Developed in: Spain
Deployed in: Spain
Domain: Family and social services
Overseeing organization: Spanish Ministry of the Interior (Secretary of State for Security)
Developing organization: -
Ever deployed? Yes
Year deployed: 2007
Summary: After the Spanish Ministry declined requests to have the algorithm audited, the Eticas Foundation partnered with victims to publish an external audit, which found that the system underestimated the level of risk in domestic and intimate partner violence cases, leading to adverse outcomes.
Time from release to discovery: <1 year
References:
  • https://eucpn.org/document/viogen
11. Twitter's Image Cropping Algorithm
Abandoned (as of 12/2023)? Yes, model owner's choice
Developed in: USA
Deployed in: Global deployment
Domain: Online platforms
Overseeing organization: Twitter
Developing organization: -
Ever deployed? Yes
Year deployed: 2018
Year abandoned: 2021
Summary: Users noticed that the algorithm Twitter used to crop images into feed previews matching device aspect ratios was more likely to crop out Black people (and members of other marginalized groups, such as women) in images with multiple people.
Time from release to discovery: 1+ year
Time from discovery to abandonment: <1 year
References:
  • AIID page: https://incidentdatabase.ai/cite/103/#r2241
12. Zoom's Virtual Background
Abandoned (as of 12/2023)? No
Developed in: USA
Deployed in: Global deployment
Domain: Online platforms
Overseeing organization: Zoom
Developing organization: -
Ever deployed? Yes
Year deployed: 2016
Summary: Zoom's virtual background feature is more likely to malfunction (i.e., hide the body of the user or fail to mask parts of the background) for users with darker skin.
Time from release to discovery: 1+ year
References:
  • Original tweet by Colin Madland: https://twitter.com/colinmadland/status/1307111816250748933
13. Amazon's AI Recruiting Tool
Abandoned (as of 12/2023)? Yes, model owner's choice
Developed in: UK
Deployed in: Global deployment
Domain: Employment
Overseeing organization: Amazon
Developing organization: -
Ever deployed? No
Year deployed: 2014
Year abandoned: 2017
Summary: Amazon disbanded an internal team whose goal was to train machine learning models to screen resumes, after the tool was found to disadvantage women applicants.
Time from release to discovery: N/A (never released/deployed)
Time from discovery to abandonment: Discovered after abandoned
References:
  • AIID page: https://incidentdatabase.ai/cite/37/#r610
14. Arkansas' RUGS Home Care
Abandoned (as of 12/2023)? Yes, in a court
Developed in: USA
Deployed in: USA
Domain: Benefit allocation
Overseeing organization: Arkansas Department of Human Services
Developing organization: InterRAI, C.H. Mack
Ever deployed? Yes
Year deployed: 2016
Year abandoned: 2018
Summary: Arkansas switched to a new algorithm called "RUGS" to assess the hours of in-home care that residents are eligible for. The algorithm resulted in severe cuts to the number of hours allocated to most residents.
Time from release to discovery: <1 year
Time from discovery to abandonment: 1+ year
References:
  • https://arknews.org/index.php/2018/05/30/archoices-rule-blocked/
15. Idaho, K.W. vs. Armstrong
Abandoned (as of 12/2023)? Yes, in a court
Developed in: USA
Deployed in: USA
Domain: Benefit allocation
Overseeing organization: Idaho Department of Health and Welfare (DHW)
Ever deployed? Yes
Year deployed: 2011
Year abandoned: 2016
Summary: The Idaho Department of Health and Welfare used an algorithm to assess the individual Medicaid budgets available to people with disabilities in its home-based services waiver program. A class-action lawsuit, K.W. vs. Armstrong, resulted in the algorithm being made public, and the algorithm was deemed unconstitutional and "arbitrary". The case was settled so that people whose budgets were cut would continue to receive their original budgets.
Time from release to discovery: <1 year
Time from discovery to abandonment: 1+ year
References:
  • https://scholarship.law.bu.edu/cgi/viewcontent.cgi?article=1835&context=faculty_scholarship
16. Missouri's NF LOC Home Care
Abandoned (as of 12/2023)? Yes, model owner's choice; Other/complicated
Developed in: USA
Deployed in: USA
Domain: Benefit allocation
Overseeing organization: Missouri Department of Health and Human Services
Developing organization: Developed in-house, "used InterRAI variables"
Ever deployed? No
Year abandoned: 2021
Summary: The State of Missouri announced in advance its plan to switch to an algorithm called "NF LOC" to assess eligibility for home-care services. The state released the algorithm for public comment, and external researchers found that it would result in severe cuts for a large group of residents (and also contained other logical errors).
Time from release to discovery: N/A (never released/deployed)
Time from discovery to abandonment: <1 year
References:
  • https://www.btah.org/case-study/missouri-medicaid-home-and-community-based-services-eligibility-issues.html
17. Takaful Cash Transfer Program in Jordan
Abandoned (as of 12/2023)? No; Repaired
Developed in: Unclear/unknown
Deployed in: Jordan
Domain: Benefit allocation
Overseeing organization: Jordan's National Aid Fund
Developing organization: The World Bank
Ever deployed? Yes
Year deployed: 2019
Summary: Human Rights Watch published an investigation revealing that the World Bank-backed algorithm for ranking and distributing financial assistance through a cash transfer program, spearheaded by the Jordanian government's social protection agency, was wrongfully disqualifying people in need. The algorithm used 57 factors, which officials declined to disclose publicly.
Time from release to discovery: <1 year
References:
  • https://thedocs.worldbank.org/en/doc/fa60c0d7ec9c896c0f44eb7112b6f005-0280012023/original/Jordan-National-Aid-Fund-NAF-Cash-Transfer-Program-Fact-Sheet.pdf
18. Netherlands' SyRI: System Risk Indication
Abandoned (as of 12/2023)? Yes, in a court
Developed in: The Netherlands
Deployed in: The Netherlands
Domain: Benefit allocation
Overseeing organization: Dutch Ministry of Social Affairs
Developing organization: Accenture
Ever deployed? Yes
Year deployed: 2018
Year abandoned: 2021
Summary: Local civil rights organizations filed a case against the state to stop its use of an algorithm for detecting welfare-benefits fraud. A Dutch court ruled that the algorithm violated protections under the European Convention on Human Rights (ECHR) and halted its operation.
Time from release to discovery: 1+ year
Time from discovery to abandonment: 1+ year
References:
  • https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/
19. EPIC Sepsis Prediction Tool
Abandoned (as of 12/2023)? No; Repaired
Developed in: USA
Deployed in: USA
Domain: Healthcare
Overseeing organization: Hospitals that use EPIC electronic medical record software
Developing organization: EPIC
Ever deployed? Yes
Year deployed: 2015
Summary: Epic, an American healthcare software company (and prominent EHR vendor), launched the Epic Sepsis Model (ESM) in 2017, a proprietary linear model that alerts clinicians to patients at high risk of sepsis. From 2021 onward, external and independent audits and investigations showed that the model was flawed; e.g., it had a lower AUC than reported and a high false-alarm rate.
Time from release to discovery: 1+ year
References:
  • StatNews, "Epic's overhaul of a flawed algorithm shows why AI oversight is a life or death issue"
20. HHS/UNOS Organ Allocation Algorithm
Abandoned (as of 12/2023)? Yes, through existing governing bodies; Yes, model owner's choice; Other/complicated
Developed in: USA
Deployed in: USA
Domain: Healthcare
Overseeing organization: HHS, UNOS
Developing organization: -
Ever deployed? Yes
Year deployed: 2020
Year abandoned: 2023
Summary: A joint investigation by The Washington Post and The Markup exposed how a shift in the UNOS allocation algorithm's criteria, which removed distance as a factor for organ donation and prioritized the sickest patients regardless of location, drastically decreased the number of transplant surgeries and increased deaths in poorer states. The day after the investigation, HHS said it would break up the contract and change its allocation process.
Time from release to discovery: 1+ year
Time from discovery to abandonment: <1 month
References:
  • https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
21. UnitedHealth and NaviHealth's nH Predict
Abandoned (as of 12/2023)? Unknown
Developed in: USA
Deployed in: USA
Domain: Healthcare
Overseeing organization: UnitedHealth
Developing organization: NaviHealth
Ever deployed? Yes
Year deployed: 2020
Summary: Families of deceased beneficiaries filed a lawsuit in the U.S. District Court for the District of Minnesota against an insurance company's healthcare algorithm for determining post-acute care. The algorithm prematurely discharged patients from facilities against doctors' recommendations and discontinued payment for necessary services and claims, targeting those on Medicare Advantage plans.
Time from release to discovery: 1+ year
References:
  • https://www.reuters.com/legal/lawsuit-claims-unitedhealth-ai-wrongfully-denies-elderly-extended-care-2023-11-14/
22. Obermeyer et al's audit of a healthcare management algorithm
Abandoned (as of 12/2023)? Unknown; Repaired
Developed in: USA
Deployed in: USA
Domain: Healthcare
Overseeing organization: An anonymous insurance company
Developing organization: -
Ever deployed? Yes
Summary: Researchers at the UC Berkeley School of Public Health used a rich dataset that provided insight into a live, nationally scaled algorithm and found that it perpetuated significant racial bias by using healthcare costs to determine who was most likely to benefit from care management programs.
Time from release to discovery: 1+ year
References:
  • https://www.science.org/doi/full/10.1126/science.aax2342
23. New Zealand's Equity Adjuster Waitlist
Abandoned (as of 12/2023)? No
Developed in: New Zealand
Deployed in: New Zealand
Domain: Healthcare
Overseeing organization: Health New Zealand (Te Whatu Ora)
Developing organization: Health New Zealand (Te Whatu Ora)
Ever deployed? Yes
Year deployed: 2020
Summary: The ACT New Zealand political party and some clinicians started a petition against Auckland health officials' efforts to make healthcare more equitable for low-income, rural, and indigenous Maori and Pasifika communities through an algorithm that adjusted patients' positions on elective surgery waitlists, claiming it racially discriminated against other New Zealanders.
Time from release to discovery: <1 year
References:
  • https://apnews.com/article/new-zealand-surgery-ethnicity-algorithm-maori-1b44026f2661772a7eb3bd4444619446
24. RealPage's YieldStar
Abandoned (as of 12/2023)? No
Developed in: USA
Deployed in: USA
Domain: Property tech
Overseeing organization: Landlords/property management companies
Developing organization: RealPage
Ever deployed? Yes
Year deployed: 2004
Summary: RealPage, a Texas-based company that provides property management software for landlords, developed an algorithm, YieldStar, that suggests daily prices for open rental units. Critics claim that the software raises antitrust concerns and has contributed to rent inflation across the U.S.
Time from release to discovery: 1+ year
References:
  • ProPublica: "Rent Going Up? How a Secret Rent Algorithm Pushes Rents Higher" (broke October 2022)
25. UK GCE Grading Algorithm
Abandoned (as of 12/2023)? Yes, model owner's choice
Developed in: UK
Deployed in: UK; Canada; India; Australia; Nigeria
Domain: Education
Overseeing organization: Ofqual
Developing organization: -
Ever deployed? Yes
Year deployed: 2020
Year abandoned: 2020
Summary: During the 2020 pandemic, UK-based exam boards decided to use an algorithm to grade students' GCE A-level exams (international certificates that are important for university placements and scholarship opportunities). Worldwide protests broke out when the algorithmically calculated results were announced, and the grades were globally rescinded and re-assigned one week later.
Time from release to discovery: <1 month
Time from discovery to abandonment: <1 month
References:
  • https://novaramedia.com/2020/08/17/fuck-the-algorithm-how-a-level-students-have-shown-future-of-protest/
26. SAS's EVAAS in Houston
Abandoned (as of 12/2023)? Yes, model owner's choice; Yes, in a court
Developed in: USA
Deployed in: USA
Domain: Education
Overseeing organization: Houston Independent School District
Developing organization: SAS Institute
Ever deployed? Yes
Year deployed: 2011
Year abandoned: 2016
Summary: In the settlement of a lawsuit brought by the Houston Federation of Teachers and the American Federation of Teachers, the Houston Independent School District agreed to terminate its use of EVAAS, which used students' performance on prior standardized tests to predict academic growth in the current year as a basis for evaluating teacher effectiveness.
Time from release to discovery: <1 year
Time from discovery to abandonment: 1+ year
References:
  • https://www.aft.org/press-release/federal-suit-settlement-end-value-added-measures-teacher-termination-houston
27. San Diego County Facial Recognition Tool (TACIDS)
Abandoned (as of 12/2023)? Yes, through new policy
Developed in: Germany; USA
Deployed in: USA
Domain: Facial recognition
Overseeing organization: ARJIS
Developing organization: FaceFirst + Cognitec (FaceVACS)
Ever deployed? Yes
Year deployed: 2007
Year abandoned: 2019
Summary: San Diego County operated a regional database known as the Tactical Identification System (TACIDS), which contained 1.8 million booking photos and was used by 30 law enforcement agencies (including ICE). Use of the database became illegal under California Assembly Bill 1215.
Time from release to discovery: <1 year
Time from discovery to abandonment: 1+ year
References:
  • https://voiceofsandiego.org/wp-content/uploads/2021/04/TACIDS-Final-Report-FINAL.pdf
28. NYPD using Clearview AI
Abandoned (as of 12/2023)? Yes, through new policy; Yes, model owner's choice
Developed in: USA
Deployed in: USA
Domain: Facial recognition
Overseeing organization: NYPD
Developing organization: Clearview
Ever deployed? Yes
Year deployed: 2018
Year abandoned: 2020
Summary: Released public records revealed that the NYPD had a formal relationship with the facial recognition company Clearview AI, including a "trial" vendor agreement, despite the department's statements that it had never had a relationship with the vendor. The NYPD released a formal facial recognition policy that banned the use of Clearview in spring 2020.
Time from release to discovery: 1+ year
Time from discovery to abandonment: <1 year
References:
  • https://www.buzzfeednews.com/article/carolinehaskins1/nypd-has-misled-public-about-clearview-ai-use
29. Detroit Project Green Light + DataWorks Plus
Abandoned (as of 12/2023)? No; Other/complicated; Yes, through new policy
Developed in: USA
Deployed in: USA
Domain: Facial recognition
Overseeing organization: Detroit Police Department
Developing organization: DataWorks Plus (which purchased AI software from NEC/Rank One)
Ever deployed? Yes
Year deployed: 2017
Summary: The Detroit PD signed a contract with a for-profit vendor (DataWorks Plus) to begin running facial recognition on live video streams from "Project Green Light" surveillance cameras and other surveillance cameras (e.g., drones). While using facial recognition on live camera streams has since been disallowed by the city's facial recognition policy, the Detroit PD continues to contract with the vendor to investigate violent crimes.
Time from release to discovery: 1+ year
Time from discovery to abandonment: 1+ year
References:
  • https://www.wxyz.com/news/region/detroit/detroits-facial-recognition-contract-set-to-expire-heres-a-look-back-at-the-last-3-years
30. City of Somerville's Face Surveillance Full Ban Ordinance
Abandoned (as of 12/2023)? Yes, through new policy
Deployed in: USA
Domain: Facial recognition
Overseeing organization: N/A
Developing organization: -
Ever deployed? No
Year abandoned: 2019
Summary: Somerville City Council voted unanimously (11-0) to pass an ordinance banning all use of facial recognition technology by city employees (including, but not limited to, the City of Somerville Police).
Time from release to discovery: N/A (never released/deployed)
References:
  • https://library.municode.com/ma/somerville/codes/code_of_ordinances?nodeId=PTIICOOR_CH9OFMIPR_ARTIIIOFAGPE_DIV1GE_S9-25BAUSFARESUTE
31. Facial Recognition in NYC Atlantic Plaza Towers
Abandoned (as of 12/2023)? Yes, model owner's choice
Developed in: Unclear/unknown
Deployed in: USA
Domain: Facial recognition
Overseeing organization: Nelson Management Group
Developing organization: Unknown vendor
Ever deployed? No
Year abandoned: 2019
Summary: Residents of a Brooklyn apartment complex partnered with a legal group to resist their landlord's plan to install facial recognition technology as a means of access to the building.
Time from release to discovery: N/A (never released/deployed)
Time from discovery to abandonment: <1 year
References:
  • https://www.nbcnewyork.com/news/local/residents-brooklyn-building-fight-landlord-installing-face-recognition/1544349/
32. Rite Aid's Enrollment Database for Surveillance Systems
Abandoned (as of 12/2023)? Yes, through existing governing bodies; Yes, model owner's choice
Developed in: USA; China
Deployed in: USA
Domain: Facial recognition
Overseeing organization: Rite Aid
Developing organization: Multiple vendors (FaceFirst, DeepCam)
Ever deployed? Yes
Year deployed: 2012
Year abandoned: 2023
Summary: Rite Aid, a large U.S. drugstore chain, deployed facial recognition surveillance systems in over 200 stores. Employees used the technology to surveil customers (e.g., "persons of interest" who might attempt to engage in criminal activity). Rite Aid stopped using facial recognition after a Reuters investigation broke in 2020, and in 2023 the FTC issued the company a five-year ban on its use of the technology.
Time from release to discovery: 1+ year
Time from discovery to abandonment: 1+ year
References:
  • https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without
33. Hyderabad's #BanTheScan Campaign
Abandoned (as of 12/2023)? No
Developed in: Unclear/unknown
Deployed in: India
Domain: Facial recognition
Overseeing organization: Telangana law enforcement, Hyderabad police
Developing organization: Unknown vendor(s)
Ever deployed? Yes
Year deployed: 2018
Summary: India's Internet Freedom Foundation partnered with Amnesty International to kick-start the #BanTheScan campaign, globally and locally, to resist law enforcement use of facial recognition.
Time from release to discovery: Unknown to coders
Time from discovery to abandonment: 1+ year
References:
  • Ban The Scan Hyderabad
34. São Paulo's Metro Intelligent Security System
Abandoned (as of 12/2023)? Yes, in a court; Other/complicated; Repaired
Developed in: USA
Deployed in: Brazil
Domain: Facial recognition
Overseeing organization: Companhia do Metropolitano de São Paulo (METRO)
Developing organization: ISS Corp
Ever deployed? Yes
Year deployed: 2020
Year abandoned: 2022
Summary: In a lawsuit brought by the public defender's office and multiple human rights and civil advocacy organizations, a São Paulo city judge ruled that the proposed facial recognition system did not meet the legal requirements of Brazil's General Data Protection Law (LGPD). However, METRO and ISS quickly turned around and deployed a new facial recognition solution that they claimed was compliant within the year.
Time from release to discovery: <1 year
Time from discovery to abandonment: <1 month
References:
  • São Paulo subway ordered to suspend use of facial recognition | ZDNET
35. Chinese Police Surveillance in the Xinjiang Uyghur Autonomous Region
Abandoned (as of 12/2023)? No
Developed in: China
Deployed in: China
Domain: Facial recognition
Overseeing organization: China Ministry of Public Security
Developing organization: SenseTime, CloudWalk, Yitu, Megvii, other vendors
Ever deployed? Yes
Year deployed: 2018
Summary: The Office of the United Nations High Commissioner for Human Rights published an assessment of concerns regarding police use of facial recognition technology to surveil and track Uyghur people across the Xinjiang region of China.
Time from release to discovery: <1 year
References:
  • China Has Created a Racist A.I. to Track Muslims | Vanity Fair
36. New Zealand's Identity Check Program
Abandoned (as of 12/2023)? No
Developed in: USA; New Zealand
Deployed in: New Zealand
Domain: Facial recognition
Overseeing organization: New Zealand Internal Affairs; Digital Identity New Zealand
Developing organization: Daon
Ever deployed? Yes
Year deployed: 2023
Summary: Data specialists and ethicists from New Zealand's indigenous Maori communities publicly accused the government of ignoring their requests to be involved in improving its digital-ID facial recognition technology, which had a reported 45% failure rate.
Time from release to discovery: <1 month
References:
  • https://www.dia.govt.nz/identity-check
37. Uganda's Intelligent Transport Monitoring System
Abandoned (as of 12/2023)? No
Developed in: China; Russia
Deployed in: Uganda
Domain: Facial recognition
Overseeing organization: Uganda Ministry of Works and Transportation
Developing organization: Huawei, Joint Stock Company Global Security
Ever deployed? Yes
Year deployed: 2019
Summary: Human rights organizations, local politicians, and other activists spoke out against the multi-purpose system, which monitors traffic and public spaces through electronic license plate recognition, high-density camera surveillance, and facial recognition, tracking vehicle locations for criminal investigations as well as for political opponents and protests.
Time from release to discovery: <1 month
References:
  • https://infrastructure.go.ug/intelligent-transport-monitoring-system-itms/
38. UCLA PredPol
Abandoned (as of 12/2023)? Yes, model owner's choice; Other/complicated
Developed in: USA
Deployed in: USA
Domain: Predictive policing
Overseeing organization: Los Angeles Police Department (among other PDs across the US)
Developing organization: PredPol, Inc.
Ever deployed? Yes
Year deployed: 2011
Year abandoned: 2019
Summary: The LAPD signed a contract with PredPol, Inc. in 2011 (the tool began as a research project with a UCLA professor) and used the software until the contract was not renewed in 2019. A data leak showed that its algorithms disproportionately targeted Black, brown, and poor neighborhoods.
Time from release to discovery: 1+ year
Time from discovery to abandonment: 1+ year
References:
  • Guardian article, November 2021
39. City of Pittsburgh + metro21's predictive policing pilot
Abandoned (as of 12/2023)? Yes, through new policy; Yes, model owner's choice
Developed in: USA
Deployed in: USA
Domain: Predictive policing
Overseeing organization: City of Pittsburgh
Developing organization: Carnegie Mellon's metro21 Smart Cities Institute
Ever deployed? Yes
Year deployed: 2016
Year abandoned: 2020
Summary: Pittsburgh City Council voted to ban "obtaining, retaining, accessing, or using FR or predictive policing technology" after facing criticism from student activists over a predictive policing experiment run in collaboration with researchers at Carnegie Mellon's metro21 institute.
Time from release to discovery: 1+ year
Time from discovery to abandonment: 1+ year
References:
  • https://capp-pgh.com/files/Primer_v1.pdf
40. Germany's HessenDATA
Abandoned (as of 12/2023)? Yes, in a court
Developed in: USA
Deployed in: Germany
Domain: Predictive policing
Overseeing organization: Hessen Police
Developing organization: Palantir
Ever deployed? Yes
Year deployed: 2017
Year abandoned: 2023
Summary: In a case brought by the German Society for Civil Rights (GFF), the German Federal Constitutional Court ruled that a predictive policing platform developed by the U.S. company Palantir was unconstitutional on the basis of the right to informational self-determination under the country's fundamental privacy rights.
Time from release to discovery: Unknown to coders
Time from discovery to abandonment: 1+ year
References:
  • Bundesverfassungsgericht - Press - Legislation in Hesse and Hamburg regarding automated data analysis for the prevention of criminal acts is unconstitutional