Yes, there was a study from the NYC Independent Budget Office (NYC Independent Budget Office, 2016)
N/A
The New York City Department of Education has deployed algorithms to process student admissions into New York City’s public high school system. The system in place uses a matching algorithm (the ‘Gale-Shapley’ or deferred-acceptance algorithm) that matches students with schools after evaluating a variety of criteria (Herold 2013). It processes information from students and their parents, including a rank-ordered list of the schools they prefer, together with institutional data such as certain qualities of each school and its admissions rules (ibid). Various questions concerning the system have been raised. Firstly, on account of the algorithm’s metrics, students have a rather low probability of being admitted to one of their preferred public high schools in the city (Abrams 2017). Secondly, researchers have found evidence that the algorithm disproportionately matches lower-income students with lower-performing schools (Nathanson, Corcoran and Baker-Smith 2013). And, despite parents being
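For illustration, below is a minimal sketch of student-proposing deferred acceptance, the general Gale-Shapley technique named above. The preference lists and school capacities are invented; this is not the NYC DOE's actual implementation or data.

```python
# Minimal sketch of student-proposing deferred acceptance (Gale-Shapley).
# Preference data and capacities are illustrative only.

def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Match students to schools; students propose in preference order."""
    # rank[school][student] = position of student in that school's ranking
    rank = {s: {st: i for i, st in enumerate(prefs)} for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}          # next school to propose to
    tentative = {s: [] for s in school_prefs}              # school -> tentatively held students
    unmatched = list(student_prefs)

    while unmatched:
        student = unmatched.pop()
        prefs = student_prefs[student]
        if next_choice[student] >= len(prefs):
            continue                                       # exhausted list: stays unmatched
        school = prefs[next_choice[student]]
        next_choice[student] += 1
        tentative[school].append(student)
        # The school keeps only its highest-ranked students up to capacity.
        tentative[school].sort(key=lambda st: rank[school].get(st, float("inf")))
        while len(tentative[school]) > capacities[school]:
            rejected = tentative[school].pop()             # lowest-ranked student is bumped
            unmatched.append(rejected)
    return tentative

students = {"ana": ["north", "south"], "bo": ["north", "south"], "cy": ["south", "north"]}
schools = {"north": ["bo", "ana", "cy"], "south": ["ana", "cy", "bo"]}
print(deferred_acceptance(students, schools, {"north": 1, "south": 2}))
```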
N/A
N/A
The University of Chicago developed a predictive policing algorithm which divides cities into 100-square-foot tiles and uses place-based predictions to determine the likelihood of crime occurring (Ferguson, 2017). The algorithm uses historical crime data to detect patterns in two particular categories of crime: violent crimes and property crimes. These categories were chosen because they are highly likely to be reported and are less prone to enforcement bias (Lau, 2020). There are concerns about the algorithm, in particular a propensity to re-criminalize former convicts along lines which have historically been shown to be racially biased (York, 2022).
N/A
N/A
Mortgage lending decisions are increasingly being determined by algorithms. Freddie Mac and Fannie Mae are loan guarantors who require lenders to use a specific credit-scoring algorithm to determine whether an applicant has reached the minimum credit threshold for approval. The algorithm was created in 2005 and used data going back to the 1990s. The algorithm is considered detrimental to people of color because it reflects a history of systemic discrimination against them in lending practices.
N/A
N/A
Los Angeles County employs an algorithm to address the mismatch between housing supply and demand. The system is based on two principles: prioritization and housing first. It is intended to put individuals and families into apartments as quickly as possible and thereafter offer supportive services. The system collects personal information such as birth dates, demographic information, and immigration and residency status. The information is used to rank a person on a scale from 0 to 17, with lower scores indicating lower risk and higher scores indicating higher risk. Those scoring between 0 and 3 are judged to need no housing intervention. Those scoring between 4 and 7 qualify to be assessed for limited-term rental subsidies and some case management services, an intervention strategy called rapid re-housing. Those scoring 8 and above qualify to be assessed for permanent supportive housing. Though it is beneficial, there are several concerns around the extensive data collected and the possible surveillance
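The tier boundaries reported above can be read as a simple score-to-intervention mapping; the sketch below is illustrative only and is not the county's actual code.

```python
# Illustrative mapping from a prioritization score to the interventions
# described above (thresholds as reported in the text; not the county's code).

def recommended_intervention(score: int) -> str:
    if not 0 <= score <= 17:
        raise ValueError("score must fall between 0 and 17")
    if score <= 3:
        return "no housing intervention"
    if score <= 7:
        return "assess for rapid re-housing (limited-term subsidy, case management)"
    return "assess for permanent supportive housing"

for s in (2, 6, 12):
    print(s, "->", recommended_intervention(s))
```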
Yes. In 2018 the Polish Constitutional Tribunal decided that the system needed to be better secured in a legislative act.
Yes. In 2018 the Polish Constitutional Tribunal decided that the system needed to be better secured in a legislative act.
The Polish Ministry of Labor and Social Policy has deployed an algorithm that profiles unemployed citizens and categorizes them according to types of welfare assistance (Algorithm Watch 2019). The system asks 24 questions that determine two criteria: “distance from the labour market” and “readiness to enter or return to the labour market” (ibid). The system has been criticized for the opacity of its decisions about access to public services and for the arbitrariness introduced by reducing a person’s case to a simplified interview. Moreover, the algorithm does not give its subjects access to its internal operations or an explanation of the decision that has been made about them (ibid). The system also acts almost entirely unilaterally; the Polish Ministry of Labor and Social Policy has limited oversight over the automated process. In sum, the algorithm’s lack of human intervention and supervision violates the legal principles enshrined in the EU’s GDPR.
N/A
N/A
In 2009, during the fallout of the financial crisis, American Express implemented algorithms to identify the spending patterns of customers who had trouble paying their bills and to reduce the financial risk these customers posed to the company by limiting their credit (O’Neil 2018). As a result, shopping in certain establishments could lower a person’s credit limit. However, these establishments were often those frequented by people who already had financial difficulties, so people in lower-income brackets were discriminated against by the algorithm.
Those affected saw their credit scores go down and, therefore, their borrowing costs go up. This, in combination with a limited ability to access credit, created a dire financial situation for many people. The algorithm effectively punished those who were already struggling with yet more financial problems (Lieber 2009).
N/A
N/A
The US Department of Agriculture (USDA) uses an algorithm to screen use of government food assistance benefits for fraud. The system tracks usage of the benefits and, where misuse is suspected, triggers a suspicious-activity flag that leads to the issuance of a charge letter to the person accused of misusing the benefits (Kim, 2007). After a charge letter is received, a retailer can file a response within 10 days clarifying proper use of the benefits system or pointing out flaws in the calculations made by the algorithm. Despite this, in 2015, 2016 and 2017 all benefit-disqualification cases in five states resulted in permanent disqualification. The operation of the algorithm negatively impacts retailers in low-income neighbourhoods, as they often give credit to customers who may not be able to afford food. Extending credit to those receiving benefits is considered a red flag by the system, so these retailers are shut down, which further harms the community (Brown, 2018).
N/A
N/A
In 2022, the Ministry of Economy and Finance authorized the Italian Revenue Agency to launch an algorithm that identifies taxpayers at risk of not paying their taxes (Spaggiari, 2022). The algorithm compares tax filings, earnings, property records, bank statements and electronic payments, looks for discrepancies, and then sends a letter to affected taxpayers alerting them to the discrepancies and requesting an explanation (Brancolini, 2022).
The algorithm's use has raised concerns about expanding state surveillance of citizens through yet another algorithmic system. Data protection rules require that the data subject authorize access to personal information, so a balance must be struck with the privacy concerns relating to citizens' data (Group, 2023).
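A hypothetical sketch of the kind of cross-checking described above: compare declared income against income inferred from other records and flag large discrepancies for a follow-up letter. The field names and tolerance threshold are assumptions for illustration, not the Revenue Agency's actual logic.

```python
# Hypothetical discrepancy check: flag taxpayers whose declared income falls
# well below income inferred from other records. All names and the threshold
# are invented for illustration.

def flag_discrepancies(taxpayers, tolerance=0.15):
    flagged = []
    for t in taxpayers:
        inferred = t["bank_inflows"] + t["electronic_payments"] + t["property_income"]
        declared = t["declared_income"]
        if declared < (1 - tolerance) * inferred:
            flagged.append({"id": t["id"], "declared": declared, "inferred": inferred})
    return flagged

records = [
    {"id": "A1", "declared_income": 20_000, "bank_inflows": 18_000,
     "electronic_payments": 9_000, "property_income": 3_000},
    {"id": "B2", "declared_income": 30_000, "bank_inflows": 22_000,
     "electronic_payments": 5_000, "property_income": 0},
]
print(flag_discrepancies(records))   # A1 is flagged; B2 is not
```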
N/A
N/A
Predictim is an algorithm that generates a risk report on a person by combing through their social media posts. It was developed by a startup incubated at the University of California, Berkeley's Skydeck accelerator. The service is specific to assessing babysitters and gives a risk score out of five to help parents and caretakers assess their employability (Thubron, 2018).
The algorithm uses natural language processing and computer vision and combs through Facebook (now Meta), Instagram and Twitter posts to flag language that may indicate an inability to take care of children or be a responsible guardian (Wiggers, 2018). The system has been flagged for several concerns, primarily inaccuracy and illegal data scraping. For one, people who have assessed babysitters through the system have received alarming risk scores that do not correspond to reality (Lee, 2018).
N/A
N/A
A French algorithmic admission tool, ‘Parcoursup’, made headlines in 2018 when it denied many prospective students admission to French public universities (AlgorithmWatch 2019). Reports indicate that the Parcoursup system advised French universities to reject and admit students based on opaque selection criteria (ibid). The French government failed to notify users that their rejection was the product of algorithmic decision-making, although it did publish the source code behind the matching selection. The software allegedly produces its decisions by processing personal data and applicant information (ibid). Outraged citizens claimed that the algorithm violated their right to enroll in a university upon obtaining a high-school diploma.
N/A
N/A
In Allegheny County, Pennsylvania, software containing a statistical model has been designed to predict potential child abuse and neglect by assigning a risk score to every case (Hurley 2018). This score ranges from 0 (lowest) to 20 (highest) and is the result of processing data from eight databases. The initiative involves various agencies, including jails, public-welfare services, and psychiatric and drug treatment centers. The function of the score is to help social workers determine whether an investigation should be carried out during the case assessment. The system hopes to save lives by alerting caseworkers to the most serious cases and allocating available resources in a manner that prioritizes these high-risk cases. In the absence of the system, human decision-making had been found to allot resources quite inefficiently, flagging 48% of the lowest-risk families for investigation while screening out 27% of the highest-risk families (ibid).
The algorithm has a mixed social impact. The s
N/A
N/A
Short for Social Finance, SoFi is a fintech company that uses AI and big data to aggregate a number of factors to determine a person's creditworthiness (Abkarians, 2018). The firm primarily gives loans to students in the US using FICO scores together with its own model, which projects a student's likely future ability to repay. The algorithm uses data such as mortgage refinancing and social media posts (Agrawal, 2017).
On one hand, using proxies to determine creditworthiness can open up new channels of finance, particularly for students. On the other hand, several studies have noted that using proxies is not an accurate or effective way of determining ability to repay and often prejudices anyone who does not fit the 'proxy mold' (Bruckner, 2018).
N/A
N/A
The Princeton Review, an educational company, offers an online tutoring service and charges between $6,600 and $8,400 for its packages. In order to purchase a package, a customer must enter a ZIP code (Miller & Hosanagar, 2019). A ProPublica analysis found that ZIP codes with a high median income or a large Asian population were more likely to be quoted the highest prices. Customers in predominantly Asian neighborhoods were nearly twice as likely to see higher prices compared to the general population, even in low-income areas (Larson, 2015).
When queried, the Princeton Review replied that its prices are simply determined by geographic region, with no intended effect on race. This poses several problems, as geographic distribution is highly racialised and certain populations bear a negative effect due to a number of historical and socio-cultural factors that are not their fault (Jr, 2016).
N/A
N/A
Over 25 cities use a tool called the Market Value Analysis algorithm to classify and map neighborhoods by market strength and investment value. Cities use MVA maps to craft tailored urban development plans for each type of neighborhood. These plans determine which neighborhoods receive housing subsidies, tax breaks, upgraded transit or greater code enforcement. Cities using the MVA are encouraged by its developer to prioritize investments and public subsidies first in stronger markets before investing in weaker, distressed areas as a way to maximize the return on investment for public development dollars.
In Detroit, city officials used the MVA to justify the reduction and disconnection of water and sewage utilities, as well as the withholding of federal, state and local redevelopment dollars in Detroit’s “weak markets,” which happened to be its Blackest and poorest neighborhoods. The recommendations from Indianapolis’ MVA meant small business support, home repair and rehabilitation,
N/A
N/A
ZipRecruiter is essentially an online job board with a range of personalized features for both employers and jobseekers. ZipRecruiter is a quintessential example of a recommender system, a tool that, like Netflix and Amazon, predicts user preferences in order to rank and filter information—in this case, jobs and job candidates. Such systems commonly rely on two methods to shape their personalized recommendations: content-based filtering and collaborative filtering. Content-based filtering examines what users seem interested in, based on clicks and other actions, and then shows them similar things. Collaborative filtering, meanwhile, aims to predict what someone is interested in by looking at what people like her appear to be interested in.
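A minimal sketch of the two filtering approaches just described, using toy vectors; these are not ZipRecruiter's actual models.

```python
# Toy illustrations of content-based and collaborative filtering.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Content-based filtering: recommend jobs whose feature vectors resemble
# the jobs this user has already clicked on.
job_features = {"backend_dev": np.array([1.0, 0.0, 1.0]),
                "data_analyst": np.array([0.0, 1.0, 1.0]),
                "sales_rep":    np.array([0.1, 0.9, 0.0])}
user_profile = job_features["backend_dev"]                 # built from past clicks
content_scores = {j: cosine(user_profile, v) for j, v in job_features.items()}

# Collaborative filtering (user-based, one neighbour): score jobs for user 0
# from the interactions of the most similar other user.
interactions = np.array([[1, 0, 1, 0],                     # user 0
                         [1, 1, 1, 0],                     # user 1 (similar)
                         [0, 0, 0, 1]])                    # user 2 (dissimilar)
sims = [cosine(interactions[0], interactions[u]) for u in (1, 2)]
neighbour = 1 if sims[0] >= sims[1] else 2
collab_scores = interactions[neighbour]                    # jobs the neighbour engaged with

print(sorted(content_scores, key=content_scores.get, reverse=True))
print(collab_scores)
```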
Personalized job boards like ZipRecruiter aim to automatically learn recruiters’ preferences and use those predictions to solicit similar applicants. Like Facebook, such recommendation systems are purpose-built to find and replicate patterns in user
N/A
N/A
In 1999, Los Angeles County began using a decision-making model, Structured Decision Making (SDM), to assess the risk of children being mistreated (Teixeira & Boyas, 2017). Concerns about the accuracy and effectiveness of the tool led to a review and report by the Los Angeles County Blue Ribbon Commission on Child Protection, whose recommendations included the adoption of a predictive analytics tool (Slattery, 2015). The tool, named the Approach to Understanding Risk Assessment (AURA), would utilise risk factors from previous cases to determine the risk of mistreatment in future cases (Nash, 2022).
The tool would hopefully streamline child protection efforts and allow a better use of resources; however, several problems cropped up, beginning with the SDM. The accuracy of the SDM was called into question, meaning that much of the data AURA was modelled on was flawed (Slattery, 2015). Secondly, manipulation of the system by knowing parents or caretakers as well as social workers meant that the
N/A
N/A
The New Zealand Ministry of Social Development, in 2016, commissioned a research team to explore a predictive modelling tool which would help prevent adverse events for children younger than 5 years old. The pilot tool was intended to be modelled after Allegheny County's screening tool and would look at data tracking a family's contact with public systems, assigning a risk score to around 60,000 children born in New Zealand in 2015 (Loudenback, 2015).
The tool would utilise 132 data points that the government has on file about a child's caregivers, including the caregiver's age and whether the parent is single (Jackson, 2016). Then opposition party spokesperson Jacinda Ardern called on officials to immediately stop the study, with UNICEF New Zealand's advocacy manager stating that assigning risk scores to newborns and waiting to see if they were abused would be a gross breach of human rights and would attach a layer of objectivity to a process that required more human than artificia
N/A
N/A
The IB’s algorithm rendered judgment using data that mostly relied on contextual factors outside of students’ own performance. Its reliance on schools’ historical data — using trends of past students to guess how a single student might have achieved — risked being unfair to atypical students, such as high performers in historically low-performing schools.
Because 94 schools new to the IB lacked sufficient data, their students’ grades were adjusted to fit the past performance of students from other schools, who may have had different circumstances. And the IB’s “active” use of teacher-predicted grades was puzzling, absent an explanation of how it would mitigate acknowledged systemic inaccuracies.
N/A
N/A
Many AI efforts within mental healthcare have been directed toward suicide. At the most basic level, ML algorithms can identify suicide risk factors, not just in isolation, but also while integrating complex interactions across variables. ML techniques applied to well-characterized datasets have identified associations between suicide risk and clinical factors such as current depression, past psychiatric diagnosis, substance use and treatment history (Jenkins et al., 2014; Passos et al., 2016), with additional analytics highlighting environmental effects (Bonner and Rich, 1990; Fernandez-Arteaga et al., 2016). Multi-level modeling of Internet search patterns has also been used to identify risk factors of interest (Song et al., 2016), with Google data analytics providing a better estimate of suicide risk than traditional self-report scales in some cases (Ma-Kellams et al., 2016).
Once identified through exploratory analyses, risk factors can be used to inform prediction models. Predicti
n/a
N/A
The European Travel Information and Authorization System (ETIAS) has, for example, been adopted to "provide a travel authorization for third-country nationals exempt from the visa requirement enabling consideration of whether their presence on the territory of the Member States does not pose or will not pose a security, illegal immigration or a high epidemic risk” (Recital 9 of Regulation (EU) 2018/1240 of 12 September 2018 establishing a European Travel Information and Authorization System (ETIAS)).
The ETIAS regulation provides for an algorithm that performs a pre-travel assessment of whether a visa-exempt third-country national (TCN) raises any security or public concerns through their movement across the border. The algorithm has raised questions, as many of the variables in its calculation may be determined to be discriminatory.
N/A
N/A
Delia (the Dynamic Evolving Learning Integrated Algorithm system) is a predictive policing system operating in Italy. The system was developed by Italian police in conjunction with KeyCrime in order to act as a crime 'forecast' by location and potential offender. The software analyses up to 1.5 million variables and assesses four criminogenic elements: type of crime, objective, modus operandi and the psycho-physical characteristics of the perpetrator (including tattoos, piercings and clothing).
The algorithm operates on the premise that hotspot analysis can lead to a higher rate of solving and predicting crime. The algorithm makes correlations among the myriad data elements to link a crime with a potential offender. The algorithm has been shown to increase the rate of cases resolved and, in turn, to reduce the police resources required and reduce overall crime.
N/A
N/A
Upstart is a lending platform that uses artificial intelligence to improve access to affordable credit for people in demographics for whom traditional methods would have made it harder to get a loan approved. Traditional methods are still widely used because lenders are comfortable with looking at the length of someone’s credit history to decide whether they qualify for a loan from a bank or a lending company.
The company uses machine learning algorithms along with artificial intelligence to take several factors into consideration when deciding how creditworthy a potential borrower is. It considers components such as one’s specific occupation, employer, education, or current institution where they may still be in school. In broadening the scope of elements that lenders can use, borrowers can be approved for loans that they may not have been approved for under traditional FICO considerations.
N/A
The Houston Federation of Teachers filed a suit against the Houston Independent School District, challenging the use of the SAS. The lawsuit was the m
The SAS system evaluates teachers based on students' test scores. These algorithms typically use data on student performance, such as test scores and grades, to calculate a score that reflects the teacher's effectiveness (Collins, 2014). The exact method used to calculate the score can vary, but many of these algorithms use statistical techniques to control for factors such as student demographics and previous academic achievement. The goal of these algorithms is to provide a fair and objective measure of a teacher's performance that can be used to support professional development and identify areas for improvement (Collins, 2014). The algorithm is also intended to hold teachers accountable and provide positive and negative incentives to improve their instruction (Amrein-Beardsley & Geiger, 2020).
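The SAS model itself is proprietary; the sketch below only illustrates the general value-added idea described above (predict each student's score from prior achievement and demographic controls, then attribute the average residual to the teacher), using invented data.

```python
# Minimal value-added sketch with synthetic data; not the SAS model.
import numpy as np

rng = np.random.default_rng(0)
n = 300
prior = rng.normal(70, 10, n)                   # prior-year test score
low_income = rng.integers(0, 2, n)              # demographic control (0/1)
teacher = rng.integers(0, 3, n)                 # which of 3 teachers
true_effect = np.array([-2.0, 0.0, 3.0])        # hidden "teacher effects"
current = 5 + 0.9 * prior - 1.5 * low_income + true_effect[teacher] + rng.normal(0, 4, n)

# Fit the control model by ordinary least squares.
X = np.column_stack([np.ones(n), prior, low_income])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ beta

# A teacher's "value added" is the mean residual of their students.
for t in range(3):
    print(f"teacher {t}: estimated value added = {residual[teacher == t].mean():+.2f}")
```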
The US Department of Education introduced a program that provided grants to schools depending on their achievement of certain metrics and the evaluations provided by the SAS(The Secret Algorith
N/A
In 2019, the Public Defender Service challenged the use of SAVRY in court. The defendant was a juvenile who was charged with robbery, pleaded guilty and showed good conduct in the time leading up to sentencing. Prior to sentencing, the SAVRY assessment determined that the defendant was at high risk of committing acts of violence, which undermined his prospect of probation and led the defense lawyers to challenge the use of the algorithm (Richardson et al., 2019).
The defense lawyer pointed out several flaws in the algorithm's functionality that prejudiced the defendant, particularly its reliance on proxy data such as parental criminality and the number of police contacts the juvenile had. The judge disallowed the use of SAVRY in the case but did not extend the ruling to other instances where SAVRY would be used (Richardson et al., 2019).
SAVRY is a risk assessment tool for juvenile offenders (Vincent & Viljoen, 2020). The algorithm, developed in 2003, operates by using 24 risk factors and 6 protective factors (Richardson et al., 2019). The risk factors, divided into historical (e.g., past supervision/intervention efforts), individual, and social/contextual (e.g., peer rejection, substance-abuse difficulties) categories, make up the score, which results in a recommendation of 'low', 'moderate' or 'high' likelihood of violence by the juvenile offender (Miron et al., 2020).
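A highly simplified sketch of that scoring structure is shown below: 24 risk items and 6 protective items rated by an assessor and rolled up into a low/moderate/high category. The real SAVRY relies on structured professional judgment rather than a fixed cut-off formula, so the item scales and thresholds here are invented.

```python
# Simplified sketch of a SAVRY-style roll-up; thresholds and scales invented.

RISK_ITEMS = 24        # historical, individual and social/contextual factors
PROTECTIVE_ITEMS = 6

def summary_rating(risk_ratings, protective_ratings):
    assert len(risk_ratings) == RISK_ITEMS and len(protective_ratings) == PROTECTIVE_ITEMS
    # Assume risk items rated 0-2 and protective items 0/1 (illustrative coding).
    total = sum(risk_ratings) - sum(protective_ratings)
    if total < 10:
        return "low"
    if total < 25:
        return "moderate"
    return "high"

example_risk = [1] * 18 + [0] * 6          # a mix of low and moderate item ratings
example_protective = [1, 0, 0, 1, 0, 0]
print(summary_rating(example_risk, example_protective))   # -> "moderate"
```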
Despite the algorithm's widespread use, concerns have been flagged around the algorithm's model (Chae, 2020). To begin with, the technique applied in SAVRY cannot be tested for falsifiability, the algorithm does not have a known error rate, and it is more likely to be misused when assessing low-income children of color (Richardson et al., 2019). Additionally, a juvenile offender is likely to have a higher score if their parent has a criminal history whic
N/A
N/A
Oregon's Department of Human Services (DHS) developed a screening tool aimed at predicting the risk that children face of winding up in foster care or being investigated in the future. The algorithm was modelled after the Allegheny Family Screening Tool, which was widely criticized for its demonstrably biased outcomes (Johanson, 2022). The algorithm draws from internal child welfare data (Williams, 2020). It produces a risk score; the higher the score, the greater the estimated chance that the child is being neglected, which influences the decision to send out a caseworker (Johanson, 2022).
An investigation into the Allegheny algorithm began to raise flags about the accuracy and potential for bias that similar algorithms posed (Burke, 2022). The model was found to have flagged a disproportionate number of Black children for mandatory neglect investigations, which not only reflected racial bias but also diverted vital resources from other children (Wil
N/A
N/A
GPT-3 is an auto-regressive artificial intelligence algorithm developed by OpenAI. It takes billions of language inputs and works as a language prediction model. The model determines the conditional probability of the words that might appear next and provides an answer or fills in a sentence (Si et al., 2022). The algorithm is intended to have multiple daily uses but has already raised concerns about its inability to minimize bias (Zhang & Li, 2021).
The algorithm has, in studies, demonstrated primarily gendered and religious bias. In a series of tests to check prompt completion, analogical reasoning and story generation, Muslims were compared with terrorists in 23 percent of cases, and when the algorithm was prompted to generate job ads, a majority of the ads generated showed a preference for male candidates (Christopher, 2021).
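GPT-3 itself is only accessible through OpenAI's API, so the sketch below uses the smaller open GPT-2 model via the Hugging Face transformers library to illustrate the mechanism described above: the model assigns a conditional probability to every possible next token given the text so far.

```python
# Next-token probabilities from GPT-2 (stand-in for GPT-3's mechanism).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>10}  p={prob.item():.3f}")
```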
N/A
In 2012, the ACLU brought a suit on behalf of Medicaid beneficiaries in Idaho. The court found deep flaws in the training data used to develop the algorithm and determined that the decisions made were not representative of the citizens' actual needs (Brown, 2020).
Prior to 2011, the amount of Medicaid benefits allocated to persons in need in the State of Idaho was assessed by people. Assessors would visit the person and interview them, determining how caretakers would be assigned (Stanley, 2017). In 2011, the use of human assessors was swapped for an algorithmic tool which determines how many hours of help persons in need are allocated (Brown, 2020). The algorithm receives data from an assessor who meets with the beneficiary. The information consists of data pooled from 200 questions asked of the beneficiary (What Happens When an Algorithm Cuts Your Health Care, 2018).
The algorithm then places the beneficiary into one of three tiers of care, with the first tier being the lowest form of support, often services in an outpatient clinic, the second tier providing more assistance and a wider range of services, and the third tier providing additional support, which can mean inpatient treatment or 24-hour paid support (Lecher, 2018).
N/A
N/A
Furhat Robotics developed a hiring robot named Tengai to make hiring decisions more quickly and in the hopes of reducing bias introduced by human reviewers. Each candidate is asked the same questions in the same order and the robot provides a transcript of the answers to human reviewers who then decide whether to proceed with the application or not.
The robot is provided no information on the applicant's race, gender or other identifying characteristics, and this is touted as its contribution to making the hiring process less biased. The approach can be flawed in a number of ways. First, an in-person interview with a rigid, formulaic approach can miss points of discussion that may be valuable in determining the employee's fit with the organization. Secondly, it forces candidates to negotiate around the existing technology instead of presenting their contribution to the company. Lastly, the assumption that introducing technology at the interview stage to lessen bias
N/A
N/A
Algorithms are increasingly being used to generate art. Models receive large databases of images, and users can then make specific creations without the traditional process of building landscapes or artistic elements from scratch. DALL-E 2 is a model created by OpenAI. The model is billed as superior to its predecessor, DALL-E, due to greater accuracy, caption matching and photorealism (OpenAI, 2022).
AI-generated art has been criticized for several reasons, primarily for infringing the copyright protections usually afforded to artistic works. If, for instance, an artist's work is used in the training dataset without disclosure or permission, legal complications arise, particularly as the resulting images may be used for commercial and other unauthorized purposes (Marcus et al., 2022). The image selection phases may include copyrighted data to train the model to identify tags and labels. Questions are also raised about the resulting generated works and copyright assignation: does it belon
N/A
N/A
Local and federal governments in Argentina continue to invest in a future of algorithmic and databased governance. On the national level, the federal government has developed a database profiling citizens based on socioeconomic data in order to allocate social benefits more efficiently as well as a database that stores civilian biometric data to improve public safety and criminal investigations.
In June 2017, the local government of the province of Salta partnered with Microsoft to create and deploy two different predictive tools optimized for identifying teenage pregnancies and school dropouts (Ortiz Freuler and Iglesias 2018). Trained from private datasets made available by the province’s Ministry of Early Childhood, the two systems identify those with the highest risk of teenage pregnancy and dropping out of school, alerting governmental agencies upon determining high-risk subjects.
N/A
A class action suit was brought by over 10,000 citizens protesting the erroneous debt repayment notices. The class action was settled, with the government agreeing to pay financial reparations including repayments of illegally raised debts, legal fees and interest on those debts (Rinta et al., 2021).
Centrelink is the Australian Government's programme to distribute social security payments to citizens and citizen groups such as the unemployed, retirees and Indigenous Australians. The programme receives part of its data from the Australian Taxation Office (ATO), which provides information on citizens' income and thus determines their support payments (Wolf, 2020). Citizens are required to report increases in earnings so that their support payments can be adjusted; where they fail to do so, they are considered debtors to the State, having received welfare overpayments.
To align earnings with support payments, Centrelink implemented the Online Compliance Intervention (OCI) programme, an automated calculation tool meant to sort out discrepancies between payments and earnings. The tool, dubbed 'Robodebt' in popular media, drew data from Centrelink and the ATO, averaged a citizen's annual income into estimated fortnightly earnings, calculated these against their support payments and calculated potential overpa
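A simplified arithmetic sketch of the income-averaging step described above; all figures are invented and the payment rules are heavily simplified.

```python
# Why income averaging can manufacture a "debt": dividing ATO annual income
# evenly across fortnights makes someone whose income was concentrated in part
# of the year, and who was legitimately on benefits the rest of the time, look
# overpaid in every fortnight. All figures are invented.

annual_ato_income = 26_000                   # earned entirely during 6 months of work
fortnights_per_year = 26
income_free_area = 300                       # simplified cut-off per fortnight
taper = 0.5                                  # simplified payment reduction rate
payment_received = 700                       # fortnightly payment while unemployed

averaged_fortnightly_income = annual_ato_income / fortnights_per_year   # = 1000
assumed_reduction = max(0, (averaged_fortnightly_income - income_free_area) * taper)
alleged_overpayment_per_fortnight = min(payment_received, assumed_reduction)

print(f"averaged income per fortnight: ${averaged_fortnightly_income:.0f}")
print(f"alleged overpayment per fortnight: ${alleged_overpayment_per_fortnight:.0f}")
```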
N/A
N/A
Schufa is Germany's leading credit bureau. Schufa's credit scores affect housing and credit card applications and may lead to rejection by internet providers. The algorithm calculates the Schufa score by aggregating data from over 9,000 companies and partners on the consumer's financial history, ultimately computing a personal score between 0 and 10,000. The consumer's score is available on request to other parties to determine the person's creditworthiness (Crichton, 2019). The algorithm processes the financial data of about 70 million people in Germany, the majority of the population above the age of 18 (Kabushka, 2021).
N/A
A lawsuit was filed against the company in 2018. The plaintiff had been denied a loan extension, and it was discovered that protected characteristics had been used to determine the applicant's creditworthiness. The National Non-Discrimination and Equality Tribunal of Finland ordered that the loan-granting procedure cease to be used and fined the company 100,000 euros (Orwat, 2020).
The company used a score to decide on loan applications. The score took into account gender, language and place of residence as well as proxy data. This led to broad assumptions: for instance, men were less likely to be issued loans, and Finnish-speaking residents had lower scores than Swedish-speaking residents, regardless of actual default rates (European Data Protection Board, 2019). Due to customer complaints the company's credit assessments were investigated, and a complaint was filed with the National Non-Discrimination and Equality Tribunal by a plaintiff who alleged discrimination on the basis of his gender and mother tongue (Matzat, 2018).
The tribunal determined that the credit scoring system had taken protected characteristics into account, contrary to the law. Additionally, the use of proxy data was found to be unlawful, as it unfairly prejudiced individuals on the basis of group membership regardless of their own credit history (Orwat, 2020). The plaintiff's credit records had not been taken into account no
N/A
N/A
TaskRabbit is a service that matches people who need odd jobs done with people looking for work. Founded in 2008, the company assesses aspiring workers ('rabbits') before allowing them to find odd jobs on the platform. After identifying a job, an aspiring rabbit may face competition from several other workers and then has to engage in a bidding war in which they accept a lower fee for their services. The rabbits receive rankings, and the rankings contribute to the likelihood of TaskRabbit suggesting them for jobs.
An assessment of the system found evidence that Black and Asian workers received lower ratings and were recommended less often by the algorithm. White men were placed higher in the ranking than Black men, and the recommendation algorithm did not correct for the discrepancies and potential bias. When questioned, the company's spokesperson denied the claims of bias, stating that the algorithm was driven by user experience.
Yes. By Microsoft Research and MIT Media Lab (Buolamwini & Gebru, 2018)
N/A
This particular algorithm is part of a bigger suite of face recognition services. It was developed by Chinese AI company Megvii to expand its offerings. The algorithm analyses faces to detect their gender and race. Apart from issues with gender and racial bias in its efficacy (Buolamwini & Gebru, 2018), there is mixed evidence that it may have been used to aid human rights abuses by the Chinese government against the Uyghur population (Alper & Psaledakis, 2021; Bergen, 2019; Dar, 2019). It is unknown what the company's response, if any, has been.
N/A
N/A
The State Office of Criminal Investigations in North Rhine-Westphalia developed, in 2015, a predictive policing program alongside six other major police authorities. The algorithm was designed to estimate burglary risk for specific areas within each district of the participating police precincts (Egbert, 2018). The algorithm utilizes crime data and socio-economic data to assess the likelihood of a spatial clustering of criminal offences. The model also uses potential future data to determine areas that may eventually have a higher risk of crime than others within a specific time window of seven days (Gerstner, 2018). The algorithm is in use in 16 major police departments and is the largest predictive policing solution in Germany (Seidensticker, 2021).
Predictive policing systems are under increasingly heavy scrutiny over their claims of fairness. The primary concern is that they over-datafy persons in a certain area or belonging to a certain group, and their overrepresentation in
N/A
N/A
The US Department of Homeland Security and the EU border agency Frontex funded development of the Automated Virtual Agent for Truth Assessments in Real Time (AVATAR), which was developed by the National Center for Border Security and Immigration (BORDERS) headquartered at the University of Arizona, eventually forming Discern Science (Salmanowitz, 2018). The objective of the algorithm was to determine the credibility of persons crossing borders by screening and interviewing them. The algorithm was field-tested at airports in Romania and Arizona (Salmanowitz, 2018).
The concept behind the system is to identify ‘deceiving’ indicators from facial expressions, audio cues and phrasing. The traveler is asked a number of questions by the algorithm’s virtual agent, which is claimed to pick up as many as 50 potential deception cues (Hodgson, 2019). The algorithm interprets the data from the responses and sends a verdict to a human patrol agent recommending the traveler either proceed or be questioned f
N/A
N/A
RealPage® AI Screening is an AI-based screening algorithm built specifically for the multifamily apartment rental industry. The solution, developed by RealPage's Resident Screening and Data Science teams, enables property management companies to identify high-risk renters with greater accuracy (RealPage Blog, 2019). RealPage developed industry-specific insights to determine willingness to pay rent. Analyzing an applicant’s capability and willingness to pay rent, the risk-assessment model predicts a renter’s financial performance (Yau, 2022).
RealPage AI Screening is made possible by pairing data science and machine learning techniques, utilizing more than thirty million actual lease outcomes to evaluate renter performance over the course of a lease (RealPage Blog, 2019). AI Screening also incorporates granular third-party consumer financial data to better predict applicant risk. Screening results are available within seconds, not hours or days as with traditional mode
N/A
N/A
The German Federal Office for Migration and Refugees (Bundesamt für Migration und Flüchtlinge, BAMF) developed a language recognition software, the Language and Dialect Identification Assistance System (DIAS), to determine country of origin from a speaker's language. The determination is intended to assess the asylum seeker's truthfulness about their application.
Procedurally, a staff member of the BAMF calls an internal phone number and enters the asylum seeker’s ID data, the asylum seeker is then invited to verbally describe a specific picture over the phone after which the recorded description is labelled as the asylum-seeker’s country-specific language sample. The data is stored in a central file repository and analyzed by centrally hosted language biometrics software (Siccardo, A. 2022).
N/A
N/A
The OASys system was developed from 1999-2001 in a series of test studies and was updated and unified in 2013 in the United Kingdom with the aim of providing a better system for offender management (National Offender Management Service, 2009). The system was approved for use by Community Rehabilitation Companies and forms a part of the National Probation System. The tool is used with all adult offenders and is said to predict reoffending within one and two years based on a set of variables including age, gender, and number of previous sanctions (Fitzgibbon, 2008).
The system classifies individual factors such as personality and temperament, societal influences, and offender demographic information to group the offender under two reoffending predictors. The first is the general reoffending predictor (OGP1) and the second is the OASys violence predictor (OVP1). The system also makes a separate Risk of Serious Harm (RoSH) assessment which determin
Yes. A technical audit was ordered by Lazio's Regional Administrative Court and was carried out by La Sapienza and Tor Vergata universities. The algorithm was audited to assess its structure and functions. The audit revealed several inefficiencies and inscrutable functions of the algorithm. The determination was that the algorithm was not suitable for its purpose, owing to a fragmented development process with differing standards, an outdated programming language and poor explainability (Salvucci, Giorgi, Barchiesi, & Matteo, 2019).
Several lawsuits were filed challenging the outcomes of the algorithm and requesting reassignment. The biggest suit challenging the algorithm was decided in favour of the teachers, who were reassigned (Chiusi, 2020).
In 2015, the then Government of Italy introduced educational reforms aimed at improving education service delivery. As part of the reforms, an algorithm was developed by HP Italia and Finmeccanica to assign teachers to vacant positions based on location preference, teachers' scores in merit-based tests and the existing vacancies (Chiusi, 2020).
The rollout of the algorithm sparked controversy. First, several thousand teachers signed a petition citing unfair treatment by the algorithm, a lack of information on the algorithm's functionality and several potential legal violations, including of the GDPR (Conde Nast, 2021). A technical audit found the algorithm to be severely lacking in logic and practicality and remarked that it was difficult to assess because of its fragmented stages of development (Salvucci et al., 2019).
N/A
N/A
The algorithm was developed by Predict Align Prevent, a non-profit company registered in Texas. The company primarily uses geospatial risk analysis to prevent child maltreatment and address high-risk areas where child maltreatment might occur (Predict Align Prevent, 2022). The algorithm's use is divided into three phases. The ‘predict’ phase involves using geospatial predictive risk modelling to narrow down areas of concern for child abuse and neglect. The ‘align’ phase uses the information provided to identify strategic community collaborators to contact and to provide more resources on the ground to properly distribute child welfare services (ACLU 2021).
During the ‘prevent’ phase, the algorithm aims to generate data to assess the effectiveness of prevention programs deployed and to inform future prevention efforts. The algorithm has, for example, been deployed in New Hampshire as part of a collaboration with the New Hampshire Department of Health and Human Services (DHHS). T
N/A
N/A
The PRECIRE algorithm was designed by the German company Precire to provide hiring assistance to companies. The algorithm was trained on a dataset for which no demographic information has been given; the company boasts 38 million text evaluations from 25,000 participants. Candidates are instructed to answer questions over video or telephone and upload the recording to be parsed by the algorithm (PRECIRE, 2020).
The algorithm extracts information from candidates' facial expressions, language, and prosodic features. This results in a personality profile that ‘predicts’ future customer-relations and employee-communication skills (Schick & Fischer, 2021). The company claims that the basis of the algorithm is driven by psychological studies which make communication “objectively predictable and measurable” (PRECIRE, 2020).
N/A
N/A
Durham Constabulary developed its HART tool over five years in collaboration with researchers from Cambridge University. The tool uses a random forest model to determine the risk of reoffending (Burgess, 2018). The algorithm works as a support to the custody officers, who ultimately make the final decision but are guided by the algorithm’s recommendation (Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016). The algorithm was developed as part of a wider ‘Checkpoint’ programme which offers an alternative to incarceration by providing social support to offenders at a moderate risk of reoffending.
The algorithm utilises information on past offending history and proxy data such as age, gender and postcode to sort offenders into three categories: low, moderate and high risk of committing new serious offences over the following two years. Those in the moderate-risk category are classified as ‘likely to commit a non-serious offence’ (Oswald, Grace, Urwin, & Barnes, 2018). In
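HART's actual model and training data are not public; the sketch below is a generic random-forest risk classifier over synthetic features of the kind listed above (offending history plus proxies such as age and postcode area), using scikit-learn.

```python
# Generic random-forest risk classifier on synthetic data; not HART itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000
age = rng.integers(18, 70, n)
prior_offences = rng.poisson(2, n)
postcode_area = rng.integers(0, 50, n)        # proxy feature, encoded as an integer
X = np.column_stack([age, prior_offences, postcode_area])

# Synthetic labels: 0 = low, 1 = moderate, 2 = high risk of serious reoffending.
score = prior_offences - 0.05 * age + rng.normal(0, 1, n)
y = np.digitize(score, [0.5, 2.5])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([[22, 4, 17]]))           # predicted risk band for one person
print(model.predict_proba([[22, 4, 17]]))     # the forest's class probabilities
```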
N/A
N/A
The National Data Analytics Solution (NDAS) is an algorithmic intervention system piloted by West Midlands Police in collaboration with London’s Metropolitan Police, Greater Manchester Police and six other police forces, intended to predict a person’s risk of committing a crime in future with the aim of intervening before the crime is committed (Mittelstadt et al.). The system uses data relating to around five million individuals, with 1,400 indicators related to crime prediction which allow the system to specify whether a person is likely to commit a violent crime or become the victim of one (Zillka, Weller, & Sargeant, 2022).
The system is primarily composed of three elements: Insight Search (a mobile-based search engine), Business Insights (which includes officer well-being) and Insights Lab (which develops policing analytics tools) (Committee on Legal Affairs and Human Rights, 2020). The use of this system is questionable, particularly raising concerns about consent mechanisms for data use in
N/A
N/A
The Resource Allocation System (RAS) is used to support budgeting decisions for social care needs. The software takes information about individual needs and circumstances and then provides a budgetary allocation. The claim is then processed by the relevant Social Security Department, which provides income support at different levels of care (Health and Social Security Scrutiny Panel, 2018). The system, developed by Imosphere, is currently in use by around 40 local authorities across England (Alston, 2018). Intended to speed up assessment processes for social care, the system has led to reduced contact between social workers and people who have care needs.
Ultimately it adds a layer of disconnect which also prevents social workers from assessing non-budgetary support needs (Health and Social Security Scrutiny Panel, 2018). Additionally, the system is not sensitive to inflation adjustments or preference-related adjustments, for instance the needs of older persons as opposed to the needs of
N/A
N/A
Xantura is a UK technology firm which claims to provide risk-based verification for social services processes. The company has developed an algorithm to process claims for Housing Benefits and/or Council Tax Reduction across several districts in the United Kingdom (Booth, 2021). The algorithm is deployed to assess the likelihood of fraud and error in benefit and tax reduction claims and was introduced to increase the efficiency of the application process (shepwayvox, 2021).
The algorithm uses over 50 variables, which include the type of claim, previous claims, the number of child dependants and the percentile group of statutory sick pay (Hurfurt, 2021). The company has been accused of contributing to growing discrimination, as the algorithm recommends some council benefit claimants for tougher social security checks while processing applications from low-risk applicants faster.
N/A
N/A
NarxCare is an algorithm used to identify patients at risk of substance misuse and abuse. The algorithm uses multi-state Prescription Drug Monitoring Program (PDMP) data, which contains years of patient prescription data from providers and pharmacies, to determine a patient's risk score for misuse of narcotics, sedatives and stimulants (Belmont Health Law Journal, 2020).
The score ranges from 000 to 999, with higher scores indicating a higher chance that the patient will abuse a prescription. The risk score is presented in the form of a 'Narx Report' which includes the risk score, any red flags in the patient's prescription history (which may put them in danger of an unintentional overdose) and other adverse events. The score is available to pharmacists, doctors and hospitals and is intended to assist them in identifying patients who might be at risk of substance abuse (Ray, 2021).
N/A
N/A
The Strategic Subject List (SSL) was a predictive policing approach deployed in Chicago that analysed several factors, including criminal history and gang affiliation, to determine the risk of a person's involvement in violence. The score ranged from zero to five hundred and was made available to street police officers (Posadas, 2017).
The SSL was intended to identify persons likely to be both victims and offenders, and according to the Chicago Police Department in 2017 it was used as a mere risk calculation tool that did not place targets on citizens for arrest. Despite this assertion, concerns surrounded the misuse of the tool, as press releases about persons who were arrested were accompanied by their SSL scores. The tool was also intended to provide social services as a mitigation measure for violent crime; however, data access requests revealed that the use of the SSL prioritised arrests over social support interventions (Kunichoff & Sier, 2017).
Yes
N/A
Smart Pricing is an algorithm created by Airbnb to automatically adjust the daily price of a property’s listing. This free tool was introduced in November 2015. The algorithm bases its prices on features like the type and location of a listing, seasonality, and other factors that influence demand for the property. The algorithm is expected to be more effective than human price setting as it has access to Airbnb servers containing large amounts of data. However, the opacity of the algorithm makes it difficult to assess, and thus the algorithm does not guarantee a benefit to the host (Sharma, 2021).
N/A
N/A
In a bid to deploy artificial intelligence in military operations, the U.S. Army is developing an algorithm to identify strike targets. The algorithm is intended to make it easier for the Army to find targets using satellite images and geolocation data. The algorithm was tested in an experimental program which assessed its ability to find a missile at a given angle. In the initial test the algorithm had a high success rate (Defense One, 2021).
However, in a subsequent test, where the algorithm was supposed to identify multiple missiles from a different angle, it was unable to do so. Worse still, the algorithm evaluated its own performance at a 90% success rate when in reality it was much closer to 25%. The development of algorithms intended for military purposes must be especially closely scrutinized. If not vetted properly, with accurate training data which mimics the fragility of wartime conditions, algorithms can prove fatal (AirforceMag, 2021).
N/A
N/A
[Updated version, 9 June 2022]
Isaak is an AI software launched in 2019 and developed by the British company StatusToday, which was bought by the Italian company Glickon in June 2020 (Modi, 2020; Glickon, 2022). Isaak aimed to provide employee behavioural analytics to employers and management.
N/A
N/A
The EAB Navigate algorithm is deployed at several major universities in the US. The company deploys algorithms that enable financial aid offices in universities to design strategies for how much funding is made available to scholarship applicants. EAB considers several factors and data points in optimizing its models, including academic profile, demographic information, standardized test scores, where applicants live and similar data (Engler, 2021).
Ultimately, the algorithm determines which students are more likely to accept an offer from a higher learning institution, as well as how revenue can be raised for the institution from higher tuition and targeted scholarships. Additionally, deploying the algorithm is intended to reduce the manual labour performed by admissions office employees in processing hundreds or thousands of applications (The Regulatory Review, 2022).
N/A
N/A
At the end of 2017, Dr. John Fahrenbach, a data scientist at the University of Chicago Medicine (UCM), developed a machine learning model that used clinical characteristics to identify patients most suitable for discharge after 48 hours. Using this tool, the hospital could decrease the length of stay for patients by allocating and prioritizing care management resources, including discharge planning, home health services, and clinical or patient administrative assistance.
During the development process, Dr. Fahrenbach’s team determined that including zip codes as a feature increased the model’s predictive accuracy. However, trends in the data showed that patients from predominantly Caucasian and relatively affluent zip codes were more likely to have shorter lengths of stay. Conversely, zip codes correlated to longer lengths of stay were predominantly African American and limited income.
Yes
N/A
Glomerular filtration rate (GFR) is considered the best overall index of kidney function. In 2009, the CKD-EPI research group developed an upgraded version of the GFR estimate to increase its accuracy. CKD-EPI’s algorithm converts the results of a blood test for a person’s level of the waste product creatinine into a measure of kidney function, the estimated GFR. This index is estimated from an equation that uses serum creatinine, age, race, sex and body size as variables (Levey et al., 2009). The obtained score indicates the severity of a person’s disease and guides health professionals on what care they should receive, for example whether to refer someone to a kidney specialist or for a kidney transplant.
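For reference, a sketch of the 2009 CKD-EPI creatinine equation as commonly published (Levey et al., 2009); serum creatinine is in mg/dL and the result in mL/min/1.73 m². Shown for illustration only, not for clinical use.

```python
# 2009 CKD-EPI creatinine equation, as commonly published; illustrative only.

def ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    gfr = (141
           * min(scr_mg_dl / kappa, 1.0) ** alpha
           * max(scr_mg_dl / kappa, 1.0) ** -1.209
           * 0.993 ** age)
    if female:
        gfr *= 1.018
    if black:
        gfr *= 1.159        # the race coefficient, removed in later revisions of the equation
    return gfr

# Same creatinine level and age; the race coefficient alone shifts the estimate:
print(round(ckd_epi_2009(1.2, 55, female=False, black=False)))   # ~68
print(round(ckd_epi_2009(1.2, 55, female=False, black=True)))    # ~78
```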
Yes
Two audits were conducted by LinkedIn researchers in which this algorithm was found to present bias (Yu & Saint-Jacques, 2022; Roth, Saint-Jacques & Yu, 2021). Both audits used an outcome test of discrimination for ranked lists, even though InstaJobs is a classification algorithm. For one audit, they gathered data on the InstaJobs algorithm's scores for approximately 193,000 jobs. In both cases, they used the scores constructed by the algorithm for each job to create a listwise ranking of the candidates and applied outcome tests to this ranked data.
N/A
Years ago, LinkedIn discovered that its InstaJobs algorithm, a recommendation algorithm that LinkedIn uses to notify candidates of job opportunities, was biased.
InstaJobs uses features of the job posting and the candidate to predict the probability that the candidate will apply for the job as well as the probability that the application will receive attention from the recruiter. The algorithm is designed to understand the preferences of both parties and to make recommendations accordingly. It generates a predicted score for each candidate based on the features mentioned above and sends notifications to candidates above a threshold.
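LinkedIn's actual model is not public; the sketch below only illustrates the score-and-notify logic described above, with invented weights, threshold and data.

```python
# Score-and-notify sketch: combine two predicted probabilities into a score
# and notify candidates above a threshold. All values are invented.

def instajobs_style_score(p_apply: float, p_recruiter_attention: float,
                          w_apply: float = 0.6, w_attention: float = 0.4) -> float:
    return w_apply * p_apply + w_attention * p_recruiter_attention

def candidates_to_notify(candidates, threshold=0.5):
    scored = {c["id"]: instajobs_style_score(c["p_apply"], c["p_attention"])
              for c in candidates}
    return [cid for cid, s in scored.items() if s >= threshold]

pool = [
    {"id": "cand_1", "p_apply": 0.8, "p_attention": 0.4},
    {"id": "cand_2", "p_apply": 0.3, "p_attention": 0.2},
    {"id": "cand_3", "p_apply": 0.6, "p_attention": 0.7},
]
print(candidates_to_notify(pool))   # -> ['cand_1', 'cand_3']
```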
Yes, there is a study on auditing PageRank on large graphs by Kang et al., 2018, which notes that, important as it might be, the state of the art lacks an intuitive way to explain the ranking results of PageRank (or its variants): e.g., why it considers the returned top-k webpages the most important ones in the entire graph, or why it gives a higher rank to actor John than to actor Smith in terms of their relevance to a particular movie. In order to answer these questions, the paper proposes a paradigm shift for PageRank, from identifying which nodes are most important to understanding why the ranking algorithm gives a particular ranking result. The authors formally define the PageRank auditing problem, whose central idea is to identify a set of key graph elements (e.g., edges, nodes, subgraphs) with the highest influence on the ranking results. They formulate it as an optimization problem and propose a family of effective and scalable algorithms (AURORA) to solve it.
There is also a study on au
N/A
Larry Page and Sergey Brin developed PageRank at Stanford University in 1996 as part of a research project about a new kind of search engine. PageRank was implemented by Google Inc. from 1 September 1998 to 24 September 2019 to provide the basis for all of Google's web-search tools. The algorithm has been used by Google Search to rank web pages in its search engine results ("PageRank - Wikipedia", 2022). Potential social impacts of the algorithm include racial discrimination, religious discrimination, social polarisation/radicalisation, and the dissemination of misinformation. The PageRank algorithm outputs a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank comp
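A minimal power-iteration sketch of the computation described above: start from a uniform distribution over pages and repeatedly redistribute rank along links, with a damping factor for the random jump. The tiny link graph is invented.

```python
# Minimal PageRank by power iteration on a toy link graph.
import numpy as np

def pagerank(links, damping=0.85, iterations=100):
    pages = sorted(links)
    n = len(pages)
    index = {p: i for i, p in enumerate(pages)}
    rank = np.full(n, 1.0 / n)                       # uniform initial distribution
    for _ in range(iterations):
        new_rank = np.full(n, (1.0 - damping) / n)   # random-jump component
        for page, outlinks in links.items():
            targets = outlinks or pages              # dangling pages spread rank evenly
            share = damping * rank[index[page]] / len(targets)
            for t in targets:
                new_rank[index[t]] += share
        rank = new_rank
    return dict(zip(pages, rank.round(3)))

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(graph))
```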
N/A
N/A
Athena Security focuses on the use of emerging technologies to assist in security operations. The company develops crime-alert tools which are trained using thousands of videos simulating weapon-related activities in various public and private places. The algorithm is built to scan an area, detect a weapon and raise a red flag to the relevant security officers (Defense One, 2019).
Despite the premise of the algorithm and its intended use to improve safety and security, several concerns surround the use of such recognition algorithms. First, there is no nuance about the legal carrying of weapons, particularly where there are growing calls for armed security guards in schools or where a person exercises their right to carry a weapon. Secondly, there have been instances of misidentification of items which have led to persons being injured or killed because security officers acted without exercising full judgement (ELear
N/A
N/A
An online tech hiring platform, Gild, enables employers to use ‘social data’ (in addition to other resources such as resumes) to rank candidates by social capital. Essentially, ‘social data’ is a proxy that refers to how integral a programmer is to the digital community, drawing from time spent sharing and developing code on development platforms such as GitHub (Richtel, 2013).
N/A
In 2018, the Connecticut Fair Housing Center and Carmen Arroyo filed a suit against CoreLogic alleging that the algorithm had discriminated against her son by denying him the opportunity to be fairly housed. The lawsuit stemmed from Carmen Arroyo's 2016 application to move her son into her apartment from a nursing home to facilitate his medical care following an accident that left him unable to care for himself. Her son, Mikhail Arroyo, was rejected because the algorithm determined that he had a 'disqualifying criminal record', leaving him in the nursing home for another year. The criminal record in question was a shoplifting charge which had been dropped and was deemed an infraction, a lower-level offence than a misdemeanor. The Connecticut federal District Court determined that consumer reporting agencies owe a duty not to discriminate in carrying out tenant-screening activities, specifically to persons with disabilities (National Housing Law Project, 2019).
CoreLogic offers a wide array of screening tools to landlords who are looking to evaluate potential tenants. CrimSAFE assesses tenant suitability by searching through their criminal records and notifying the landlord if the prospective tenant does not meet the criteria that the landlord establishes (Lecher, 2019). CoreLogic states that it uses multiple sources of criminal data to make a determination, including data from most states and from the Department of Public Safety. The algorithm does not assess risk; it either qualifies or disqualifies an applicant based on criteria set by the landlord (Pazniokas, 2019).
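A hedged sketch of that qualify/disqualify logic follows. The record fields, the landlord-set criteria and the lookback window are invented for illustration; CoreLogic has not published how CrimSAFE actually represents either.

```python
# Illustrative qualify/disqualify check. The record structure and the
# landlord-set criteria below are assumptions for the sketch, not
# CrimSAFE's actual data model.
from dataclasses import dataclass

@dataclass
class CriminalRecord:
    category: str       # e.g. "theft", "assault"
    disposition: str    # e.g. "convicted", "dismissed"
    years_ago: int

def crimsafe_style_check(records, disqualifying_categories, lookback_years):
    """Return 'disqualified' if any record matches the landlord's criteria."""
    for r in records:
        if (r.category in disqualifying_categories
                and r.years_ago <= lookback_years):
            # Note: this check ignores the disposition, so even a dismissed
            # charge can trigger a disqualification -- one of the concerns
            # raised in the Arroyo case.
            return "disqualified"
    return "qualified"

records = [CriminalRecord("theft", "dismissed", 2)]
print(crimsafe_style_check(records, {"theft"}, lookback_years=7))  # disqualified
```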
The use of automated background checks presents several opportunities for discriminatory outcomes. The algorithms are not transparent about the exact data or weighting systems used to make the determinations. The landlord is also not transparent with the prospective renter about the criteria they set to disqualify, and this can lead to a landlord setting st
A study by Obermeyer et al. found evidence of racial bias in this algorithm: "At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts." (Obermeyer et al., 2019)
N/A
The algorithm was developed by health services innovation company Optum to help "hospitals identify high-risk patients, such as those who have chronic conditions, to help providers know who may need additional resources to manage their health" (Morse, 2019).
The algorithm uses historical and demographic data to predict how much the patient is going to cost the health-care system, as an intended proxy for identifying those patients at higher risk and in greater need of care. However, due to sociopolitical reasons, African Americans in the US have historically incurred lower costs than white people, which means that the algorithm systematically identified white people as more in need of care than Black people who were actually sicker (Obermeyer et al., 2019; Morse, 2019; Johnson, 2019).
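The following small simulation sketches the proxy problem described by Obermeyer et al.: if the model ranks patients by predicted cost and equally sick Black patients have historically generated lower costs, fewer of them are flagged for extra care. All numbers are invented solely to illustrate the mechanism, not drawn from the study's data.

```python
# Toy illustration of label-choice bias: ranking by cost (the proxy)
# vs. ranking by illness (the real target). All numbers are invented.
import random

random.seed(0)

patients = []
for group in ("white", "black"):
    for _ in range(1000):
        illness = random.uniform(0, 10)                   # true health need
        access_factor = 1.0 if group == "white" else 0.7  # assumed unequal spending
        cost = illness * access_factor + random.gauss(0, 0.5)
        patients.append({"group": group, "illness": illness, "cost": cost})

def share_black(selected):
    return sum(p["group"] == "black" for p in selected) / len(selected)

top_by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:200]
top_by_illness = sorted(patients, key=lambda p: p["illness"], reverse=True)[:200]

print("flagged by cost proxy, share Black:   ", round(share_black(top_by_cost), 2))
print("flagged by true illness, share Black: ", round(share_black(top_by_illness), 2))
```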
N/A
N/A
In late 2020, Stanford Medicine Hospital developed an algorithm to automate the process of deciding which of its staff to prioritise for the first available Covid-19 vaccines. Reportedly, the algorithm was a simple rules-based formula that computed a discrete series of factors: "It considers three categories: “employee-based variables,” which have to do with age; “job-based variables”; and guidelines from the California Department of Public Health. For each category, staff received a certain number of points, with a total possible score of 3.48. Presumably, the higher the score, the higher the person’s priority in line." (Guo & Hao, 2020)
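A hedged reconstruction of what such a rules-based score might look like is sketched below. Only the three categories and the 3.48 maximum are reported (Guo & Hao, 2020); the per-category values used in the example are placeholders, not Stanford's actual formula.

```python
# Sketch of a rules-based vaccine-priority score with three categories.
# The reported maximum was 3.48; the sub-scores passed in below are
# invented placeholders, not the hospital's actual values.
def priority_score(employee_vars: float, job_vars: float, cdph_vars: float) -> float:
    """Each argument is that category's sub-score; the total is capped at 3.48."""
    total = employee_vars + job_vars + cdph_vars
    return min(total, 3.48)

# Hypothetical example: an older staff member in a patient-facing role.
print(priority_score(employee_vars=1.2, job_vars=1.0, cdph_vars=0.9))  # 3.1
```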
Despite the fact that resident physicians were more at risk of getting Covid-19 by being in closer contact with Covid-19 patients, the algorithm gave them a low priority, and only seven of the 1,300 resident medical staff were selected to receive one of the first 5,000 vaccines available to Stanford Medicine. Because of the way the algorithm wa
N/A
N/A
COMPAS (the acronym of Correctional Offender Management Profiling for Alternative Sanctions) is an algorithmic tool developed and launched by Northpointe (now called Equivant) in circa 1997 to assess, among other things, the risk of recidivism of prisoners, and it can be used to assist judges in deciding whether and under which conditions to release a prisoner. COMPAS, which has been used in New York State since at least 2001, computes different risk factors associated with a given prisoner and produces an overall risk score between 1 (low risk) and 10 (high risk). Over the years, COMPAS (as well as other similar tools) has been adopted by more and more courts in the US. (Equivant, 2019 & 2022; Angwin, 2016)
Research has shown that COMPAS tends to discriminate against people of colour by reportedly assigning them systematically higher risk scores than those given to white people under similar circumstances. A stark example is that of an 18-year-old black woman who was arrested f
Nutri-Score is an open-source algorithm that's publicly available. (Santé publique France, 2021)
N/A
Nutri-Score is a nutrition labelling scheme that seeks to rank products and score them based on their nutritional composition. It was developed by the French Public Health authority and based on work done by researchers and other institutions (Santé publique France, 2022).
Nutri-Score uses a simple and open-source algorithm to give different food products a value on a five-point scale, from A (very good) to E (very bad), with a corresponding colour from green (A) to red (E). The algorithm weighs the amount of nutrients that "should be encouraged (fibers, proteins, fruits, vegetables, pulse, nuts, and rapeseed, walnut and olive oils)" against the "nutrients that should be limited: energy, saturated fatty acid, sugars, salt" (Santé publique France, 2021 & 2022).
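A simplified sketch of the Nutri-Score idea follows: points for nutrients to limit minus points for nutrients to encourage, mapped onto the A-E scale. The point values and letter thresholds below are illustrative stand-ins, not the official tables published by Santé publique France (which also differ for beverages, fats and cheeses).

```python
# Simplified Nutri-Score-style computation. The inputs and the letter
# thresholds are illustrative stand-ins for the official scheme.
def nutri_style_score(negative_points: int, positive_points: int) -> str:
    """negative_points: energy, saturated fat, sugars, salt.
    positive_points: fibre, protein, fruit/vegetables/nuts/certain oils."""
    score = negative_points - positive_points
    # Illustrative cut-offs only; the official scheme uses different
    # thresholds for general foods, beverages, fats and cheeses.
    if score <= -1:
        return "A"
    if score <= 2:
        return "B"
    if score <= 10:
        return "C"
    if score <= 18:
        return "D"
    return "E"

print(nutri_style_score(negative_points=4, positive_points=6))   # A
print(nutri_style_score(negative_points=20, positive_points=1))  # E
```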
N/A
N/A
TikTok is a social media app in which users (mostly teenagers and young adults) share and watch short videos (normally between 15 and 60 seconds long), launched in 2017 by the Chinese company ByteDance as an international version of its app Douyin, which had been available in China since 2016 (Klug & al., 2021; Shu, 2020).
Like other social media apps, TikTok uses an algorithm to automate the curation of the content its users see, and by default TikTok "presents videos to the user on their ‘for you’ page as an endless, hard to anticipate flow of auto-looped videos to swipe through". (Klug & al., 2021)
N/A
N/A
The Dutch Government developed a risk indication system to detect welfare fraud. The algorithm collects data including employment records, personal debt records and previous benefits received by an individual. Additionally, the algorithm analyses education levels and housing histories to determine which individuals are at a high risk of committing benefit fraud. The deployment of the algorithm was disastrous: over 26,000 families were accused of trying to commit social benefits fraud and were asked to return over 1 billion dollars in benefits, pushing many of them into financial ruin.
Additionally, people of colour were disproportionately affected, with many reporting lasting mental health issues. An extensive investigation by the Dutch Data Protection Authority revealed that the algorithm categorized neutral factors such as dual nationality as risky, and several initial complaints that were made were not properly followed up or given lit
N/A
Mr. Jose Rodriguez, in Rodriguez v Massachusetts Parole Board SJC-13197 (2021), filed a lawsuit against the Massachusetts Parole Board. The suit alleges that the denial of his application for parole was due, in part, to reliance on a predictive algorithm used to determine the suitability of an applicant for parole, resulting in an unconstitutional use of discretion by the parole board. In particular, Mr. Rodriguez challenged the suitability of the algorithm for his case, as he was a juvenile offender and was 60 years old at the time of filing the suit. This raised concerns about the ability of the Parole Board to take into account dynamic changes in an offender's life, particularly for a juvenile offender over a long period of time. The case is currently ongoing.
LS/CMI is a predictive analytical tool used to predict recidivism in parole applicants. The algorithm is deployed in Massachusetts, which has licensed it from MultiHealth Systems, Inc. since 2013. The algorithm uses a point system to assess parole applicants according to 54 predictive factors, including education level and marital status. A higher score indicates that the applicant is less likely to be approved for parole. The use of this tool is hidden from applicants, and the exact scoring system and the weight given to risk factors as well as special considerations are not divulged, leading to concerns about the accuracy of the tool (Suffolk University Law School, 2022).
In Mr. Rodriguez’s case, the context of his case and his assessment is at question because assessments of the algorithm indicate that it does not take into account unique circumstances of juvenile offenders. Where predictive tools are concerned, different circumstances must be weighted by the system differently to accurately
N/A
N/A
The Canadian government has, for several years, been working on several algorithmic solutions to automate and speed up the processing of immigration applications. These tools have been tested, with Immigration, Refugees and Citizenship Canada (IRCC) confirming that automated systems are already in place to triage applications and sort them into two categories: simple cases, which are processed further, and complex cases, which are flagged for review (Meurrens, 2021).
The algorithm used, Chinook, extracts applicant information and presents it in a spreadsheet which is assigned to visa officers who can review several applications on a single screen. It also allows the officers to create risk indicators and flag applications which need further review. (Nalbandian, 2021)
N/A
N/A
Price determination for ride-share applications typically depends on the distance to be covered and the time it will take to cover that distance. While this model is the current basis for pricing, algorithms have been introduced to determine dynamic, individualized pricing, and these may be discriminatory.
A study conducted on 100 million ride-hailing samples from the city of Chicago showed a tendency for prices to increase in areas that had larger non-white populations, higher poverty levels, younger residents, and higher education levels. The study identified that these increases occur because residents of these areas tend to use ride-hailing apps more, and the price increase was based on that usage. This creates a punitive effect for those who are most in need of ride-hailing services and, due to the demographic distribution, it can discriminate against certain groups.
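The toy sketch below shows how usage-based dynamic pricing can produce the neighbourhood-level disparities the study describes: if the fare multiplier rises with local demand, areas whose residents rely most on ride-hailing pay more for the same trip. The base rates and the multiplier formula are invented, not taken from any platform.

```python
# Toy demand-responsive pricing. The rate constants and the surge formula
# are invented to illustrate the mechanism, not any app's actual model.
def fare(distance_km: float, minutes: float, demand_ratio: float) -> float:
    """demand_ratio: recent ride requests divided by available drivers in the area."""
    base = 2.50 + 1.10 * distance_km + 0.30 * minutes
    surge = max(1.0, min(demand_ratio, 3.0))   # cap the multiplier at 3x
    return round(base * surge, 2)

same_trip = dict(distance_km=5.0, minutes=15.0)
print(fare(**same_trip, demand_ratio=1.0))  # low-usage neighbourhood
print(fare(**same_trip, demand_ratio=2.2))  # high-usage neighbourhood pays more
```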
N/A
N/A
The deterioration index is a tool used to triage patients, that is, to determine how much care and treatment a patient needs based on how sick they are. The index seeks to replace the traditional triage system, done by medical staff, with an automated system that collects data from patients and makes determinations. At the initial stages of the Covid-19 pandemic, an electronic health record company released the tool to help doctors distinguish between low-risk and high-risk patients for transfer to the ICU. The algorithm uses different risk factors such as age, chronic conditions, obesity and previous hospitalizations.
The algorithm was deployed rapidly to America’s largest health care systems; however, it had not been independently validated prior to its deployment. The risks associated with deploying triaging algorithms can be severe, as bias in the system can mean that patients do not get adequate healthcare. Concerns have been raised about the lack of transparency of the model t
N/A
N/A
Amazon uses a proprietary productivity tool to assess whether its employees are meeting their work quotas. The system tracks the rate of each individual’s productivity and it generates automated responses about the quality or productivity of workers and delivers warnings or terminations to employees.
The system has led to the firing of hundreds of workers for various infractions and has faced scrutiny as well as labour disputes because it is excessively punitive. For workers, this has meant missing crucial breaks, such as bathroom breaks, and neglecting physical and mental health to keep up with their productivity quotas. Though supervisors are able to override the system, it is relied on as the primary decision maker.
N/A
N/A
IATos (which could be translated as AICough from Spanish) is an algorithmic system designed to detect whether a person has Covid-19 by analysing an audio file of that person coughing. (Buenos Aires, 2022a)
The Buenos Aires Municipality developed IATos during 2021 and has been implementing it since February 2022. The algorithmic system is available to users through a WhatsApp bot also managed by the Buenos Aires Municipality. People can record themselves coughing and send the audio file to the chatbot on WhatsApp, which then automatically responds to that message recommending whether or not the person should get tested, depending on whether the algorithm considers them a suspected Covid-19 case. (Buenos Aires, 2022a)
Even though the little public information available about TENSOR says that any systems developed would have "built-in privacy and data protection" (TENSOR, 2019; Akhgar, 2017), we haven't been able to find any kind of external audit that could confirm such protection.
N/A
TENSOR was a project developed by a consortium of the same name between 2016 and 2019, and funded by the European Commission's Horizon2020 programme. (TENSOR, 2019)
The general objective of the project was to develop a digital platform equipped with algorithmic systems for law enforcement agencies across Europe to detect as early as possible terrorist activities, radicalisation and recruitment online. (TENSOR, 2019)
Yes. In 2019, the company was audited by the National Institute of Standards and Technology (NIST) and was ranked first of 75 tested systems, with a positive identification rate of 99.5% (National Institute of Standards and Technology, 2019). Despite having the highest accuracy rate, it had a false match rate for Black women that was 10 times higher than that for white women (Simonite, 2019). The company was audited again in 2021 and was found to have maintained its position as having the highest accuracy rate (NIST, 2022).
N/A
IDEMIA is a technology company based in France primarily dealing in recognition and identification services. The company’s recognition systems are used in numerous countries, including the US, Australia, European Union member states and West African countries, among others. IDEMIA provides several recognition systems which track faces, vehicles, number plates and various objects (Schengen Visa Info News, 2021).
The system has no restrictions on usage, including in private and public spaces, and in particular it is included as a tool by law enforcement (Burt, 2020). This poses several risks, particularly in the absence of legislation that prohibits the excessive use of recognition systems, and can mean increased mass surveillance and violations of privacy rights.
N/A
N/A
The Austrian Public Employment Service (AMS) announced plans to roll out an algorithmic profiling system for job seekers in 2018. The system was introduced with three main goals in mind:
N/A
The California Department of Insurance and Consumer Watchdog filed a complaint with a California administrative law judge, alleging unfair price discrimination and manipulation of the algorithm to drive up prices or drive down discounts. The company denies the allegations and the case is pending.
Allstate is an auto insurance company covering a wide range of policies. The company utilizes a ‘price optimisation’ algorithm which aims at developing personalised pricing for customers. The algorithm works by determining a price based on factors other than risk. The exact functioning of the algorithm was largely unknown until a 2013 rate filing, submitted for approval to Maryland regulators and ultimately disapproved, disclosed significant aspects of the way the algorithm works. The company intended to improve its rating plan by including algorithms to adjust customer rates (Sankin, 2022).
To determine the new rates, customers were divided into micro-segments based on several criteria such as birthdate, gender and zip code. Researchers determined that the algorithm would unfairly target non-white customers as well as customers between 42 and 62 years of age, marking an increase of between 5 and 20 percent. The company used this model to determine pricing in several states and has faced numer
N/A
N/A
SafeRent is an algorithmic system developed by US data firm CoreLogic (today marketed under the brand SafeRent Solutions) and in use in the US since at least 2013 (CoreLogic, 2013).
SafeRent is used to produce screening reports of prospective tenants for landlords, and the algorithm works by collecting different kinds of data about people from different sources and estimating a "SafeRent Score" between 200 ("Unlikely Candidate") and 800 ("Best Candidate") by trying to predict how a prospective tenant would behave. (SafeRent, 2022)
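As a hedged sketch of how a model's prediction might be rescaled onto SafeRent's published 200-800 range, the snippet below maps a predicted probability linearly onto that scale. The underlying model and the linear mapping are assumptions, since CoreLogic does not disclose how the score is actually computed.

```python
# Illustrative rescaling of a predicted probability of a "good tenancy"
# onto a 200-800 score range. The probability model itself is not shown
# and the linear mapping is an assumption.
def saferent_style_score(p_good_tenancy: float) -> int:
    p = min(max(p_good_tenancy, 0.0), 1.0)
    return int(round(200 + p * 600))

print(saferent_style_score(0.05))  # 230 -> near the "Unlikely Candidate" end
print(saferent_style_score(0.95))  # 770 -> near the "Best Candidate" end
```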
N/A
N/A
Instagram launched in 2010 as a single stream of photos in chronological order. Facebook bought Instagram in 2012 (today they both are owned by Meta). By 2016, Instagram had introduced an algorithmic system that tried to rank and show posts in a personalised way to users, based on what they would find most interesting. (Mosseri, 2021)
These days, different parts of the Instagram app (Feed, Explore, Reels...) use different algorithms to offer different kinds of personalised content to their users. (Mosseri, 2021)
N/A
N/A
Public Safety Assessment (PSA) is a pretrial risk assessment tool developed by the Laura and John Arnold Foundation, designed to assist judges in deciding whether to detain or release a defendant before trial. The tool is used in several jurisdictions and includes three different risk assessment algorithms (Milgram et al., 2015). The algorithms determine a risk score based on nine factors:
N/A
The New York Civil Liberties Union (NYCLU) and Bronx Defenders filed a class action suit against ICE challenging the algorithm’s fairness in light of the uptick in the algorithm automatically recommending detention regardless of mitigating factors that would warrant release. The NYCLU confirmed that the algorithm had been manipulated, based on data it received through a Freedom of Information Act lawsuit. The lawsuit also alleges that detainees have had their right to due process violated because they were not informed that the algorithm was being used to determine their release and were not given an opportunity to challenge it. The lawsuit is intended to challenge the usage of the algorithm and restore a hearing process for detainees to determine their release. The suit is pending (Bloch-Wehba, 2020).
As part of its mandate, U.S. Immigration and Customs Enforcement (ICE) determines whether a person arrested for an immigration offence will be detained, released on bond, or simply released pending trial. ICE deploys an algorithm to determine which option to select for arrestees, and the algorithm has been in use since 2013. The algorithm works by reviewing the detainee’s history, including factors like their likelihood to be a threat to public safety and their likelihood to be a flight risk. The algorithm then recommends one of four options: detention without bond, detention with the possibility of release on bond, outright release, or deferring the decision to a human ICE supervisor. (Biddle, 2020)
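The sketch below maps two assessed risks onto the four reported outcomes. The thresholds and the way the two risks are combined are invented; reporting on the system (discussed next) indicates the release recommendation was later effectively removed, which in this sketch would amount to deleting one rule.

```python
# Illustrative mapping from assessed risks to the four reported outcomes.
# The threshold values and the combination logic are assumptions.
def ice_style_recommendation(public_safety_risk: float, flight_risk: float) -> str:
    if public_safety_risk > 0.8:
        return "detention without bond"
    if public_safety_risk > 0.5 or flight_risk > 0.7:
        return "detention with possibility of bond"
    if public_safety_risk < 0.2 and flight_risk < 0.2:
        return "release"            # reportedly no longer recommended after 2017
    return "refer to ICE supervisor"

print(ice_style_recommendation(0.1, 0.1))  # release
print(ice_style_recommendation(0.9, 0.3))  # detention without bond
```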
In 2015 and 2017, changes were observed in the algorithm which indicated that the algorithm could no longer recommend that the arrestee be released. Whereas between 2013 and 2017 the algorithm recommended around 47 percent of arrestees be released, after June 2017, this number showed
Yes. Hirevue conducted two audits on its algorithm and made the results public. In December 2020, a report of Hirevue's algorithm was released by O'Neil Risk Consulting and Algorithmic Auditing (ORCAA). The algorithm was audited for fairness and bias concerns specifically assessing four areas of concern:
In 2019, the Electronic Privacy Information Center filed a complaint against Hirevue with the Federal Trade Commission (FTC) alleging unfair practices. Hirevue conducted an independent audit of its own, with results that were largely positive about the company while flagging bias-related problems such as the assessment of candidates with different accents. In response to concerns raised about the algorithm, the company announced that the visual aspect of the product was discontinued in March 2020, stating that the visual analysis did not add value to the assessments.
Hirevue is a face-scanning algorithm which assesses jobseekers on behalf of prospective employers. The algorithm scans candidates' faces and analyzes their facial movements, their words (word choice and word complexity are assessed) and their manner of speaking. The algorithm is trained on data collected from previous successful employees. It analyzes the candidate's data for suitability and then compares it to the two highest-performing employees in the company. Lastly, the algorithm produces a score out of 100 for the suitability of the candidate based on the previous assessments.
The employer can then choose to advance the applications of those with a score above a set threshold or automatically reject candidates who score below it. The algorithm has been relied on by more than 250 companies and has been the subject of scrutiny, as its shortcomings are detrimental to the livelihoods of applicants and risk reinforcing existing biases in hiring.
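A hedged sketch of that comparison-and-threshold step follows: a candidate's extracted features are compared to those of top-performing employees and scaled to a 0-100 score. The feature values, the cosine-similarity choice and the threshold are all invented for illustration, not Hirevue's disclosed method.

```python
# Illustrative candidate scoring by similarity to top performers.
# The feature vectors, similarity measure and threshold are assumptions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def candidate_score(candidate_features, top_performer_features):
    """Average similarity to the top performers, scaled to 0-100."""
    sims = [cosine(candidate_features, tp) for tp in top_performer_features]
    return round(100 * sum(sims) / len(sims), 1)

top_performers = [[0.8, 0.6, 0.7], [0.9, 0.5, 0.8]]   # hypothetical feature vectors
score = candidate_score([0.7, 0.6, 0.6], top_performers)
print(score, "advance" if score >= 70 else "reject")   # threshold is illustrative
```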
N/A
A lawsuit was filed against the company by a group of Deliveroo riders, and an Italian court found that Deliveroo had discriminated against its workers by failing to include, as part of its ranking algorithm, legally protected reasons for withholding labour. The company was fined 50,000 euros, and a representative for the company stated that the algorithm had been discontinued (Geiger 2021).
The food delivery platform Deliveroo used an algorithm to evaluate riders on its platform. The algorithm ranked each rider based on several measures of their ability to deliver food, and it included among its ranking criteria data points on cancellations or failures to begin a pre-booked shift. As a result of the ranking, the app would give higher-ranked riders more shifts in busier time blocks, which directly impacted their earning capacity.
The algorithm, however, was not designed to recognize mitigating circumstances that might lead to a cancellation, for instance medical and health reasons and other categories of legally protected reasons for withholding labour.
There have been attempts to carry out external audits of the algorithm, which for outsiders is in effect a black box.
A paper published in 2021 described the results of an audit for "discrimination in algorithm delivering job ads". The authors said their audit confirmed "skew by gender in ad delivery on Facebook". (Imana et al., 2021)
N/A
Facebook (renamed Meta in 2021) was initially launched in February 2004, and the website has used advertising to generate income ever since (Kirkpatrick, 2010). Over the years, Facebook's advertising business has grown much more sophisticated and has become the company's main source of income, and these days the company uses an algorithmic system to run it. To outsiders, the algorithms are a black box: their internal workings are not known.
Generally speaking, there are two steps in how the Facebook Ad algorithm works. First, advertisers decide which "segments of the Facebook population to target" (Biddle, 2019), like "American men who like pizza and beer" or "Spanish young women interested in sports and organic food".
N/A
N/A
Spotify launched in 2008 as a music online streaming service. Like other such services, Spotify developed and started using a recommendation algorithm, today known as BaRT (Bandits for Recommendations as Treatments). The aim of the algorithm is to learn what users like and what they may like, and offer different kinds of personalised and curated recommendations, so that the user can keep listening to music they like and stay on the Spotify platform. (Marius, 2021)
The work of the algorithm can be clearly seen on the Spotify home screen, which is full of different categories of content being recommended to the user, from the songs they play the most, to the ones they were recently listening to, to new ones that the algorithm thinks the user will like. In 2015 Spotify added podcasts to its streaming services (Bizzaco et al., 2021), and since then podcasts are also curated and recommended by Spotify's algorithm.
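BaRT is reported to be a bandit-style recommender; its exact model is not public. Below is a toy epsilon-greedy bandit sketch of the general explore/exploit idea behind such recommenders. It is not Spotify's actual implementation; the content categories and the simulated user behaviour are invented.

```python
# Toy epsilon-greedy bandit over content categories: mostly exploit what
# the user has responded to, occasionally explore something new. This
# illustrates the general bandit idea, not Spotify's BaRT specifically.
import random

random.seed(1)

categories = ["recently played", "new releases", "podcasts", "discover"]
clicks = {c: 1.0 for c in categories}      # optimistic prior
shows = {c: 2.0 for c in categories}

def pick_category(epsilon=0.1):
    if random.random() < epsilon:                                 # explore
        return random.choice(categories)
    return max(categories, key=lambda c: clicks[c] / shows[c])    # exploit

def record_feedback(category, clicked):
    shows[category] += 1
    if clicked:
        clicks[category] += 1

# Simulated session: this (invented) user tends to engage with podcasts.
for _ in range(200):
    c = pick_category()
    record_feedback(c, clicked=(c == "podcasts" and random.random() < 0.6))

print(max(categories, key=lambda c: clicks[c] / shows[c]))  # likely "podcasts"
```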
N/A
N/A
In 2019, Bristol’s City Council in the UK began to make use of an algorithm which would compile data from local authorities, including the police and the NHS, to predict the likelihood of antisocial behavior, such as drug use and domestic violence in around 50,000 families across Bristol. The algorithm was developed in-house by a data analytics hub, Insight Bristol (Booth, 2019).
The algorithm relies on data from around 30 different public sector resources, including Social care systems, the Department for Work and Pensions and the Avon and Somerset Constabulary. (Booth, 2019)
N/A
N/A
Prior to 2016, the amount of Medicaid benefits allocated to persons in need in the State of Arkansas was assessed by people. Assessors would visit the person and interview them, determining how caretakers would be assigned. In 2016, the State of Arkansas swapped human assessors for an algorithmic tool which determines how many hours of help persons in need are allocated. The algorithm receives data from an assessor who meets with the beneficiary; the information consists of answers to 200 questions asked of the beneficiary. The algorithm then places the beneficiary into one of three tiers of care: the first tier is the lowest form of support, often services in an outpatient clinic; the second tier provides more assistance and a wider range of services; and the third tier provides additional supports, which can mean inpatient treatment or 24-hour paid support (Arkansas Advocates for Children and Families, 2018).
These algorithms are not only dehumanizing, redu
We haven't found any information about the DANTE algorithms having been externally and independently audited.
N/A
DANTE was a project funded by the European Commission as part of its Horizon2020 Programme, and which seems to have been active between 2016 and 2019. DANTE was developed by a consortium formed by private organisations and public bodies from EU members Italy, Spain, Belgium, Portugal, Germany, France, Ireland, Greece and Austria, as well as from the UK, which during that period was still an EU member. (DANTE Consortium, 2019).
DANTE's aim was "to deliver more effective, efficient, automated data mining and analytics solutions and an integrated system to detect, retrieve, collect and analyse huge amount of heterogeneous and complex multimedia and multi-language terrorist-related contents, from both the Surface and the Deep Web, including Dark nets". (DANTE Consortium, 2019)
N/A
N/A
After two violent attacks by right-wing extremists in 2019 in Germany, the German Ministry of Interior announced a series of measures, including the development of an algorithmic risk assessment tool to try to identify potentially violent right-wing extremists. (Flade, 2021)
At the beginning of 2020, the German Federal Criminal Police (BKA) started developing the algorithm, called RADAR-rechts (RADAR-right). First, a review of the existing literature was done, and experts were consulted. And RADAR-rechts was at least partially based on a similar algorithm that the German Police has been using since 2017 to assess the risk of violent behaviour by Islamist extremists. (Flade, 2021)
Investigative collective Lighthouse Reports and the Dutch public broadcaster, the VPRO, conducted a partial audit of the algorithm in 2021. The Rotterdam municipality "voluntarily disclosed some of the code and technical information" but not other required details, like training data. (Lighthouse Reports, 2021)
Lighthouse Reports "data scientists have so far conducted a partial audit based on the disclosed material evaluating data quality, reliability, transparency, and accuracy". (Lighthouse Reports, 2021).
N/A
Since at least 2018, the Rotterdam Municipality has been using an algorithm to estimate the risk of welfare recipients committing fraud. It's not known who developed the algorithm. (Lighthouse Reports, 2021)
The algorithm is fed personal details about every welfare recipient, "from address to mental health history to hobbies", which result in each person being assigned a particular fraud risk score. Based on that score, the Rotterdam Municipality has since 2018 put "thousands of welfare recipients" under investigation, and "hundreds have had their benefits terminated". (Lighthouse Reports, 2021)
Yoti says that its "age estimation technology has been certified by an independent auditor for use in a Challenge 25 policy area and has been found to be at least 98.86 percent reliable". (Yoti, 2022).
However, no other public details of that audit are available.
N/A
In March 2021, the Home Office and the Office for Product Safety and Standards of the British government launched a pilot project for "retailers, bars and restaurants" to trial technology to carry out age verification checks. (UK Home Office & Baroness Williams of Trafford, 2021).
Then, after some delay, several British supermarkets announced that they would start trialling age verification software developed by technology company Yoti, in a phase lasting from January to May 2022.
It hasn't been independently audited. But the OECD showed interest in Send@ and is running an evaluation of its performance in collaboration with the Public Employment Service in Spain (SEPE), which plans to hand in the final report in June 2022. (Expansion, 2021)
N/A
The Public Employment Service in Spain (SEPE) reportedly developed Send@ (Luengo, 2021), and has been using it since 2021 with the aim of helping job-seekers by generating customised job advice according to the job-seeker's data.
Public servants at SEPE using Send@ enter the new job-seeker's details into the system; the Send@ algorithm then looks among former job-seekers for profiles that are similar to that of the new job-seeker, selects those who went on to have the best careers, and checks which actions those people followed. (Expansion, 2021)
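A minimal nearest-neighbour sketch of that reported logic follows: find former job-seekers with similar profiles, keep those with the best outcomes, and surface the actions they took. The profile fields, the distance measure and the example records are invented for illustration.

```python
# Illustrative nearest-neighbour advice: the profile fields, the distance
# function and the example records are assumptions for the sketch.
def distance(a, b):
    return (abs(a["age"] - b["age"]) / 50
            + abs(a["years_experience"] - b["years_experience"]) / 30
            + (0 if a["sector"] == b["sector"] else 1))

former = [
    {"age": 34, "years_experience": 8, "sector": "retail",
     "outcome": 0.9, "actions": ["digital skills course", "CV workshop"]},
    {"age": 52, "years_experience": 25, "sector": "construction",
     "outcome": 0.4, "actions": ["forklift licence"]},
    {"age": 30, "years_experience": 5, "sector": "retail",
     "outcome": 0.8, "actions": ["language course", "CV workshop"]},
]

def advise(new_seeker, k=2):
    neighbours = sorted(former, key=lambda f: distance(new_seeker, f))[:k]
    best = max(neighbours, key=lambda f: f["outcome"])   # best subsequent career
    return best["actions"]

print(advise({"age": 33, "years_experience": 7, "sector": "retail"}))
```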
N/A
N/A
Zillow is an online real estate marketplace that allows users to virtually view homes for sale across the globe. An arm of the company, Zillow Offers, developed an algorithm to estimate the cost of homes with a view to creating savings for Zillow. The algorithm failed in its aim and instead overestimated the value of the homes Zillow purchased, leading to a loss of more than $500 million (Stokel, 2021).
The failure of such an algorithm affects people's ability to purchase property, as it renders housing availability subject to fluctuations in poorly understood algorithmic models (Parker & Putzier, 2021). Despite Zillow reformulating its model, similar models are deployed across the world with potentially devastating consequences.
No, and the British Department for Work and Pensions (DWP) has "rebuffed attempts to explain how the algorithm behind the system was compiled". (Savage, 2021)
In November 2021, The Guardian reported that the Greater Manchester Coalition of Disabled People (GMCDP), with the help of campaign group Foxglove, sent a letter to the DWP "demanding details of the automated process that triggers the investigations" into possible fraud, and which seems to disproportionately target disabled people (Savage, 2021).
The British government's Department for Work and Pensions (DWP) uses an algorithmic system to try and flag potential fraudulent welfare applications. It's not clear when exactly the algorithm was introduced by the DWP nor how exactly it works. (Savage, 2021)
In November 2021, it was reported that the DWP's algorithm seems to disproportionately target disabled people as possible fraudsters. After being flagged by the system, people were "subjected to stressful checks" and could face "an invasive and humiliating investigation lasting up to a year" (Savage, 2021).
We haven't found information about the system being externally audited, but its authors say that Prometea is traceable and explainable: "Prometea works through traceable, auditable and reversible machine learning. This means that it is not a 'black box', and that it is perfectly possible to establish what is the underlying reasoning that makes the prediction. (...) As a rule, all the methodology used to design Prometea is accessible, traceable and understandable, in a clear and familiar language to describe how results are reached." (Corvalán & al, 2020)
N/A
Prometea was developed by the Public Prosecutor’s Office of the City of Buenos Aires as a "supervised learning" system to help civil servants by automating tasks as an "optimizer of bureaucratic processes". It functions as an interactive expert system that, like a voice assistant, requests inputs from the civil servant and generates outputs that would've taken much longer to get without using the software. "For example, from 5 questions, you are able to complete a legal opinion by which you must reject an appeal by extemporaneous". (Corvalán & al, 2020)
Prometea works both by completely automating processes ("the algorithms connect data and information with documents automatically. The document is generated without human intervention") and by automating processes with reduced human intervention ("in many cases, it is necessary that the persons interact with an automated system, in order to complete or add value to the creation of a document"). (Corvalán & al, 2020)
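The toy example below illustrates the interactive, question-driven document generation described above: a handful of answers fill a template for a draft opinion. The questions, the timeliness rule and the template text are invented; they only sketch the general "expert system plus document automation" pattern, not Prometea's actual content.

```python
# Toy question-driven document drafting, illustrating the general
# expert-system pattern; the questions, rule and template are invented.
QUESTIONS = [
    ("case_number", "Case number? "),
    ("appellant", "Appellant name? "),
    ("filing_date", "Date the appeal was filed (YYYY-MM-DD)? "),
    ("deadline", "Legal deadline for filing (YYYY-MM-DD)? "),
    ("court", "Court name? "),
]

TEMPLATE = (
    "DRAFT OPINION - {court}\n"
    "Case {case_number}: appeal filed by {appellant} on {filing_date}.\n"
    "{conclusion}\n"
)

def draft_opinion(answers: dict) -> str:
    late = answers["filing_date"] > answers["deadline"]   # ISO dates compare as strings
    conclusion = ("The appeal was filed after the deadline and should be "
                  "rejected as untimely." if late
                  else "The appeal was filed within the deadline and should proceed.")
    return TEMPLATE.format(conclusion=conclusion, **answers)

print(draft_opinion({
    "case_number": "123/2020", "appellant": "J. Perez",
    "filing_date": "2020-03-15", "deadline": "2020-03-10", "court": "Example Court",
}))
```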
We haven't found information of independent audits of such algorithms, but a data compilation and investigation by The Markup and The City media outlets found that screening algorithms at New York City high schools discriminate disproportionately against Black and Latino applicants. (Lecher & Varner, 2021)
N/A
In New York City, some high schools use screening processes, in many cases involving automated algorithms, to pick their students from among all the applicants they receive. Different schools use different mechanisms, and these processes are generally opaque: it is not clear why exactly some students are accepted at a given school while others are not.
In May 2021, The Markup and The City media outlets published the results of a data investigation that showed that these screening processes discriminate disproportionately against students of colour, and especially those of Black and Latino origin.
No, but Amnesty International published a report on the functioning of the algorithm (Amnesty, 2021)
N/A
This is an algorithmic decision-making system that created risk profiles of childcare benefits applicants "who were supposedly more likely to submit inaccurate applications and renewals and potentially commit fraud. Parents and caregivers who were selected by this system had their benefits suspended and were subjected to investigation". (Amnesty, 2021)
"The tax authorities used information on whether an applicant had Dutch nationality as a risk factor in the algorithmic system. “Dutch citizenship: yes/no” was used as a parameter in the risk classification model for assessing the risk of inaccurate applications. Consequently, people of non-Dutch nationalities received higher risk scores. The use of the risk classification model amounted to racial profiling. The use of nationality in the risk classification model reveals the assumptions held by the designer, developer and/or user of the system that people of certain nationalities would be more likely to commit fraud or crime than peopl
Yes, an academic paper published in 2018 examined the algorithm's reliability and validity: "Results suggest there are challenges to the reliability and validity of the VI-SPDAT in practical use. VI-SPDAT total scores did not significantly predict risk of return to homeless services, while type of housing was a significant predictor. Vulnerability assessment instruments have important implications for communities working to end homelessness by facilitating prioritization of scarce housing resources. Findings suggest that further testing and development of the VI-SPDAT is necessary." (Brown, 2018)
N/A
VI-SPDAT (Vulnerability Index — Service Prioritization Decision Assistance Tool) is an algorithmic system designed to assist civil servants or community workers in the process of providing housing services to homeless people. The algorithm is meant to assess each person's different situation to assign a "vulnerability index" that may help in allocating assistance to the people deemed most in need. (Thompson, 2021; OrgCode, 2021)
After its launch by OrgCode and Common Ground in 2013, it was considered so helpful and successful that at least 40 US states started using it. (Thompson, 2021)
N/A
N/A
This is an algorithmic system developed by an ad-hoc team of experts called Valencia IA4Covid, hosted by the Alicante node of the ELLIS Foundation and working with the Valencian regional government.
The aim of the algorithm is to prescribe what regulations would be "optimal" for the authorities to implement to keep the social costs of restrictions and the number of Covid-19 cases to a minimum.
N/A
N/A
"With all of its offices forced to close due to social distancing measures made necessary by COVID-19, Spain’s Social Security administration" needed a "solution that could replace the customer service of several physical offices and handle the high volume of inquiries during the pandemic. The solution also had to consider General Data Protection Regulation (GDPR) guidelines and offer secure online support to citizens, from providing appointment booking to digitizing documents. (...) Working closely with Google Cloud, [the Spanish Ministry for Social Security] developed the International Social Security Association (ISSA) Social Security assistant based on Dialogflow, the Google Cloud artificial intelligence technology used to create chatbots that are trained to learn the natural language of users to provide interactive responses. At the same time, Spain’s Social Security administration developed a set of cloud-based applications to collect citizens’ requests for minimum vital income b
VioGén has not been externally audited.
In 2018, the Spanish Ministry of Interior published a book about the development of the VioGén protocol (Spanish Ministry of the Interior, 2018). This book provides quite an informative account of how the algorithmic system was designed and how it works. However, the book doesn't reveal the algorithm's code or how much weight the different questions in the protocol are given (by themselves and in relation to other questions) for the algorithm to generate a risk score.
N/A
"VioGén was launched in 2007 following a 2004 law on gender-based violence that called for an integrated system to protect women [Spanish Official Gazette, 2004]. Since then, whenever a woman makes a complaint about domestic violence a police officer must give her a set of questions from a standardized form. An algorithm uses the answers to assess the risk that the women will be attacked again. These range from: no risk observed, to low, medium, high or extreme risk. If, later on, an officer in charge of the case thinks a new assessment is needed, VioGén includes a second set of questions and a different form, which can be used to follow up no the case and which the algorithm uses to produce an updates assessment of the level of risk. The idea was that the VioGén protocol would help police officers all over Spain produce consistent and standardized evaluations of the risks associated with domestic violence, and that all the cases that are denounced would benefit from a more structured
No. Spanish investigative organisation Civio asked the Spanish government to release the source code of BOSCO and the government refused. Civio took the government to court and the case is ongoing (Civio, 2019).
"Civio filed an administrative appeal on June 20 after the Council of Transparency and Good Governance (CTBG) declined to force the release of source code of software dismissing eligible aid applicants. (...)
"The complexity of the process combined with the malfunctioning software, BOSCO, and lack of information about the nature of rejections resulted in only 1,1 million people out of 5,5 potential beneficiaries profiting from the so-called Bono Social. The former government estimated 2,5 million people would receive the subsidy. (...)
"In the midst of the economic crisis, in 2009, the Spanish government passed a law subsidizing the electricity bills of about five million poor households. The subsidy, called social bonus or bono social in Spanish, has been fought in court by the country’s electric utilities ever since, with some success. Following a 2016 ruling, the government had to introduce new, tighter regulations for the social bonus and all beneficiaries had to re-register by 31 December 2018.
"Half a million refusals
N/A
N/A
A Spanish firm called Bismart has developed software that allows public agencies to predict the needs of dependent elderly people before an emergency occurs. The system aggregates data related to “social services, health, population, economic activity, utility usage, waste management and more” to give risk assessment analysis for the elderly (Algorithm Watch 2019). The company hopes to transition home care for the elderly from a “palliative to a proactive approach”, as well as allocate health care resources in the most efficient way possible (ibid).
Both the right to privacy and to non-discrimination could potentially be affected by this software. Given the amount of data sources that are processed in order to make the predictions, the system poses a substantial risk to reproducing structural biases already existing in society.
Yes, by 2019 the Espoo Municipality was discussing ethical questions related to the use of such alert systems with expert organisations such as The Finnish Center for Artificial Intelligence (FCAI).
N/A
In Espoo, the second largest Finnish city, a software aimed at identifying risk factors associated with the need for social and medical services among children has been deployed at dubious ethical costs (Algorithm Watch 2019). The model, developed by a firm called Tieto, analyzes anonymized health care and social care data of the city’s population and client data of early childhood education. Producing preliminary results, it discovered “approximately 280 factors that could anticipate the need for child welfare services” (ibid). The system was unprecedented in Finland; no other program had ever used machine learning to integrate and analyze public service data (ibid). The next iteration of the system will seek to “utilise AI to allocate services in a preventive manner and to identify relevant partners to cooperate with towards that aim” (ibid).
Finnish authorities are taking ethical and legal issues seriously, but the use of predictive algorithms by public agencies tasked with allocati
N/A
N/A
The Finnish start-up DigitalMinds has created software aimed at efficiently matching job applicants' profiles with the optimal profile for each job description (Algorithm Watch 2019). The firm hopes to remove human participation from the job application process, ideally speeding it up and making it more reliable. The system draws from public online interfaces like Twitter, Facebook, Gmail and Microsoft Office to create a digital profile that includes a complete personality assessment (ibid).
DigitalMinds claims that it has received no objections to its analytic practices from prospective candidates thus far (ibid). However, perhaps the consent given by a job applicant should be questioned given the distinct power asymmetry between a potential employer and an applicant.
N/A
N/A
The Oakland Police Department (OPD) has deployed an algorithmic tool called ShotSpotter to fight and reduce gun violence (ShotSpotter 2018). The system detects gunshots through sound-monitoring microphones (Gold 2015). The gathered data are processed by an algorithm that identifies the type of event and alerts the police. According to the developer, the tool has been credited with a 29% reduction in gunfire incidents in Oakland from 2012 to 2017 (ibid). The software has also been implemented, to similar degrees of success, in several other American cities, including New York City, Cincinnati, Denver, Chicago, St. Louis County, San Diego, and Pittsburgh.
The system faces controversy regarding the installation of its sensors in public spaces and their capacity for surveillance activities. For example, ShotSpotter has been found to erroneously record private conversations in various instances (Goode 2012). While ShotSpotter has been placed on university campuses all across America due to the prevalenc
N/A
N/A
Ride-sharing app Uber employs all of its drivers as independent contractors and manages them not with human supervisors but with algorithms. The firm has faced criticism for creating a work environment that feels like a world of “constant surveillance, automated manipulation and threats of ‘deactivation'” (Rosenblat 2018). One African-American driver from Pompano Beach, Fla., Cecily McCall, terminated a trip early because a passenger called her ‘stupid’ and a racial epithet. She explained the situation to an Uber support representative and promptly received an automated message that said, “We’re sorry to hear about this. We appreciate you taking the time to contact us and share details” (ibid). The representative’s only recourse was to not match the passenger with McCall in the future. Outraged, McCall responded, “So that means the next person that picks him up he will do the same while the driver gets deactivated” (ibid). McCall had noticed how Uber’s algorithm effectively pun
N/A
N/A
In 2018, British local authorities implemented predictive analytical software designed to identify families in need of child social services. The algorithms were oriented towards improving the allocation of public resources and preventing child abuse (McIntyre and Pegg 2018). In order to build the predictive system, data from 377,000 people were incorporated into a database managed by several private companies. The town councils of Hackney and Thurrock both hired a private company, Xantura, to develop a predictive model for their children’s services teams. Two other councils, Newham and Bristol, developed their own systems internally.
Advocates of the predictive software argue that they enable councils to better target limited resources, so they can act before tragedies happen. Richard Selwyn, a civil servant at the Ministry of Housing, Communities and Local Government argues that “It’s not beyond the realm of possibility that one day we’ll know exactly what services or interventions w
Some studies have been made to suggest a solution to Netflix privacy issues, including:
– McSherry, F., Mironov, I (2009): Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contenders. In KDD’09, Paris, France.
N/A
Netflix employs algorithmic processing to create profiles of users from collected metadata and to predict which content they are most likely to consume. Every time one of its 300 million users selects a series or a movie, the system gathers a host of data related to that media consumption, such as clicks, pauses, and indicators that the user stopped watching (Narayanan 2008). Netflix aggregates such data to fashion profiles that intuit information concerning the user.
Even though the company has established several security mechanisms and uses proxies of users’ personal data to construct their profiles, the system is quite vulnerable to deanonymization attacks. Researchers Narayanan et al. (2008) were able to break the anonymization of Netflix’s database by analyzing some proxy information provided by users. Narayanan et al.’s breakthrough evidenced a “new class of statistical deanonymization attacks against high-dimensional micro-data, such as individual preferences, recommendation
N/A
N/A
The Metropolitan Police of London has implemented an automated system to assess the potential harm that gang members impose on public safety. Since 2012, the initiative has used algorithmic processing to collect, identify, and exchange data about individuals related or belonging to gangs (Amnesty International 2018). The system uses the gathered information to calculate two scores: 1) the probability an individual joins a gang and 2) their level of violence.
However, the specific metrics and criteria used to assign “harm scores” have not been revealed by the Metropolitan Police, which raises questions about transparency. Amnesty International has pointed out that the ‘’Ending Gang and Youth Violence Strategy for London’’ from 2012 gives insights regarding the issue. According to it, each gang member would be scored taking into account the number of crimes he/she has committed in the past three years “weighted according to the seriousness of the crime and how recently it was committed”
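A hedged sketch of the scoring described in that strategy document follows: each crime in the past three years contributes points weighted by its seriousness and its recency. The seriousness values, the recency weights and the example data are invented; the Metropolitan Police has not disclosed the actual metric.

```python
# Illustrative harm score: a seriousness weight times a recency weight,
# summed over offences in the last three years. All weights are invented.
SERIOUSNESS = {"violent assault": 10, "robbery": 6, "theft": 2}

def recency_weight(months_ago: int) -> float:
    if months_ago <= 12:
        return 1.0
    if months_ago <= 24:
        return 0.6
    if months_ago <= 36:
        return 0.3
    return 0.0           # outside the three-year window

def harm_score(offences):
    return sum(SERIOUSNESS.get(kind, 1) * recency_weight(months_ago)
               for kind, months_ago in offences)

print(harm_score([("robbery", 4), ("theft", 30)]))           # 6.6
print(harm_score([("violent assault", 20), ("theft", 40)]))  # 6.0
```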
N/A
N/A
The Manchester Police Department has implemented a predictive policing system developed by IBM, which is able to identify hot zones for crimes in the city, mainly robbery, burglary and theft (Bonnette 2016). On the basis of historical data on crime hotspots, which contains a variety of variables related to crime (location, day, weather conditions, etc.), the system predicts where future criminal events are likely to take place (Dearden 2017). The system has been reported to cause a reduction in criminal activity in the targeted areas, as well as an improvement in public confidence in the police (UCL 2012).
As is the case with other predictive policing tools, the system presents great risks to privacy as well as the reproduction of structural societal biases. It has been well documented that when algorithms train from historical data sets, they learn how to emulate racially biased policing (ibid).
N/A
N/A
The New York Police Department has developed its own algorithmic facial identification system used in both routine investigations and concrete criminal events, such as terrorist attacks (Fussell, 2018). The NYPD system compares biometric data stored in its database in order to check previous criminal records and identify suspects (Garvie, et al., 2016). Critics argue the system lacks transparency, hiding internal operations that produced biased results (Ibid).
Controversy exists surrounding the system’s tendency for misidentification. Facial recognition has been shown to be accurate mainly when dealing with white males (Lohr, 2018). A study demonstrated that “the darker the skin, the more errors arise” (ibid). The technology’s disproportionate inaccuracy with people of color has sparked debate on the impact of such systems on the privacy and civil rights of racial minorities, leading Georgetown lawyers to sue the NYPD over the opacity of the facial recognition software (Fussel
N/A
N/A
The Carolinas HealthCare, a large network of hospitals and services, is currently applying an algorithmic risk-assessment system to target high-risk patients by using a variety of data such as purchasing records and other environmental variables (Data & Civil Rights Conference, 2014). Algorithmic systems are being used to analyse consumer habits of current or potential patients and construct specific risk profiles. According to a report, one of the largest hospitals in North Carolina “is plugging data for 2 million people into algorithms designed to identify high-risk patients” (Pettypiece and Robertson, 2014).
In general, data analytics plays a considerable role in the healthcare industry. While it promises to modernize the industry, bringing about unprecedented efficiency and advances, it also creates a unique set of challenges related to data privacy. People’s medical conditions and health can be deduced from a variety of indicators like purchasing history, phone call patterns, online brow
N/A
N/A
In 2010, a Taiwanese-American family discovered what seemed to be a malfunction in their Nikon Coolpix S630 camera: every time they took a photo of each other smiling, a message flashed across the screen asking, “Did someone blink?” No one had, so they assumed the camera was broken (Rose, 2010).
Face detection, one of the latest smart technologies to trickle down to consumer cameras, is supposed to make taking photos more convenient. Some cameras with face detection are designed to warn you when someone blinks; others are triggered to take the photo when they see that you are smiling (Rose, 2010). Nikon’s camera was not broken, but its face detection software was wildly inaccurate – that is, unless you’re Caucasian. Nikon has failed to comment on the error in its software, but Adam Rose of TIME magazine has offered an explanation: the algorithm probably learned to determine ‘blinking’ from a dataset of mostly Caucasian people (ibid). The software had not been trained with Asian eyes and
N/A
N/A
A team of researchers from Carnegie Mellon University discovered that female job seekers are much less likely to be shown adverts on Google for highly paid jobs than men (Datta, Tschantz and Datta, 2015). They deployed an automated testing rig called AdFisher that created over 17,370 fake jobseeking profiles. The profiles were shown 600,000 advertisements by Google’s advertising platform AdSense which the team tracked and analyzed (Ibid.). The researchers’ analysis found discrimination and opacity. Males were disproportionately shown ads encouraging the seeking of coaching services for high paying jobs. The researchers were unable to deduce why such discrimination occurred given the opacity of Google’s ad ecosystem, however, they offer the experiment as a starting point for possible internal investigation or for the investigation of regulatory bodies (ibid).
N/A
N/A
Target, a large American retailer, implemented software in 2012 designed to improve customer tracking practices in order to obtain better predictions of customers’ future purchases. The system would create profiles of customers through credit card information and would cater advertisements and targeted coupons to profiles with certain purchasing behavior. For example, the system recognized 25 products that, when purchased together, suggested that the buyer had a high probability of being pregnant (ibid). Upon identifying a possibly pregnant customer, Target would send coupons for maternity products, baby food, etc. In 2012, upon receiving emails with offers of products for babies, a father went to his local Target demanding to speak with a manager. He yelled at the manager, saying, “My daughter got this in the mail! She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?” The manager apologized to him
N/A
N/A
In May 2015 Google launched an app for sharing and storing photos called “Google Photos”. Google Photos has an algorithmic system that groups people’s photos by automatically labelling their albums with tags. In the beginning, the labelling algorithms from Google Photos seemed to work well, until the system’s failure to recognize dark-skinned faces was revealed when it tagged black people as “gorillas”. This mistake was noticed by Jacky Alcine, a young man from New York, when checking his Google Photos account on June 28th, 2015. He realized that the image-recognition algorithm had generated a folder titled ‘Gorillas’, which only contained images of him and his friend.
Three years after the event, Wired revealed that Google had “fixed” the problem by reducing the system’s functionality: the company blocked Google Photos’ algorithms from identifying gorillas altogether (Simonite, 2018).
N/A
N/A
The rise of renewable energy and the dynamic trade of electrical energy across European markets has prompted Germany to turn to algorithms to manage its energy infrastructure. Automated systems in the form of SCADA (Supervisory Control and Data Acquisition) systems have brought smoother feed-in, control of decentralized electricity, and added resilience to the energy supply (AlgorithmWatch 2019). These systems utilize machine learning and data analytics to monitor and avoid supply failure and mitigate power fluctuations (ibid). Unfortunately, the systems also bring costs, namely to privacy and security. Government experts argue that the new smart systems are more susceptible to cyberattacks (ibid). Given the energy grid’s position as a fundamental piece of national infrastructure, this criticism should be taken seriously. Advocacy groups point out that the automated systems also allow for surveillance capabilities within the private home.
N/A
N/A
Facebook’s search function has been accused of treating pictures of men and women very differently. The algorithm was reported to readily provide results to the search “photos of my female friends” but yield no results to the search “photos of my male friends”. According to Joseph (2019) of WIRED Magazine, the algorithm assumed that “male” was a typo for the word “female”. Worse, when users searched for “photos of my female friends” the search function auto-filled suggestions like “in bikinis” or “at the beach” (Joseph 2019). Researchers explain the disparity as the algorithm promoting popular searches over those that are rarely made. This means that since people often search for “photos of my female friends in bikinis”, the algorithm has grown adept at providing such photos and suggesting the search option. Despite being written as unassuming lines of code, algorithms assume the form of whatever data is fed to them, often resulting in the candid reflection of human wants, behavior, te
N/A
A 2015 study by the University of Maryland and the University of Washington highlighted a noticeable gender bias in Google image search results for queries related to various professions (Cohn 2015). The study not only found a severe under-representation of women in image search results for ‘CEO’ and ‘Doctor’ but also demonstrated a causal linkage between the search results and people’s conceptions of real-world professional gender representation (ibid). Google declined to comment on the study’s findings.
N/A
N/A
In an effort to combat gender bias in translations, Google Translate has transitioned to showing gender-specific translations for many languages (Lee 2018). The algorithm inadvertently replicates gender biases prevalent in society as it learns from hundreds of millions of translated texts across the web. This often results in words like “strong” or “doctor” being translated as masculine, while words like “nurse” or “beautiful” are rendered feminine (ibid). Because it was trained on biased data, the algorithm has a longstanding gender bias problem, and it remains to be seen whether the new feature can entirely resolve it.
N/A
N/A
Amazon recently abandoned an AI recruiting tool that was biased in favor of men seeking technical jobs (Snyder 2018). Since 2014, an Amazon team had been developing an automated system that reviewed job applicants’ resumes. Amazon’s system taught itself to demote resumes containing the word “women’s” and to give lower scores to graduates of various women’s colleges. Meanwhile, it decided that words such as “executed” and “captured,” which are apparently used more often in the resumes of male engineers, suggested that a candidate should be ranked more highly (ibid). The system’s gender bias failures originate in the biased training data it was given to learn from: Amazon’s workforce is unmistakably male-dominated, and thus the algorithm sought out resumes with characteristically male attributes. The team attempted to remove the gender bias from the system but ultimately decided to scrap the program entirely in 2017 (ibid.).
N/A
N/A
‘Black boxes’ are algorithms installed on Internet Service Provider (ISP) platforms that allow intelligence agencies to monitor internet traffic, often in order to surveil for terrorist threats. Two years after ‘black boxes’ were first legalized in France, the first one was implemented. The program has been criticized for lacking legal oversight and has not been audited for scope or specific objectives (Algorithm Watch 2019). Critics argue that these systems have far-reaching effects on privacy, freedom of speech, and equality.
N/A
N/A
Juan Carlos Pereira Kohatsu, a 24-year-old data scientist, developed an algorithm that detects levels of hate speech across Twitter. The Spanish National Bureau for the Fight against Hate Crimes, an office of the Ministry of the Interior, co-developed the tool and hopes to deploy it to ‘react against local outbursts of hatred’ (Algorithm Watch 2019). According to “El País” (Colomé, 2018), the algorithm tracks about 6 million tweets in 24 hours, filtering for more than 500 words linked to insults, sensitive topics, and groups that frequently suffer hate crimes. Pereira’s analysis suggests that the number of hateful tweets remains relatively stable day-to-day in Spain, ranging between 3,000 and 4,000 tweets a day.
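As described, the pipeline begins with keyword filtering over a large tweet stream before any classification takes place. The snippet below is a minimal sketch of that first stage under stated assumptions: the watch-list terms, the tweet format, and the helper names are placeholders, not the Ministry’s actual list or code.

```python
# Minimal sketch (assumptions, not the Ministry's tool): flag tweets containing
# any term from a watch-list, then count flagged tweets per day.
from collections import Counter

WATCHLIST = {"insult_a", "insult_b", "slur_c"}  # placeholder terms; the real list has 500+

def is_potentially_hateful(tweet_text: str) -> bool:
    """True if the tweet shares at least one token with the watch-list."""
    tokens = {t.strip(".,!?¡¿").lower() for t in tweet_text.split()}
    return bool(tokens & WATCHLIST)

def daily_counts(tweets):
    """tweets: iterable of (created_at datetime, text) pairs."""
    counts = Counter()
    for created_at, text in tweets:
        if is_potentially_hateful(text):
            counts[created_at.date()] += 1
    return counts
```

In the reported system, tweets that pass this coarse filter would then be scored by a trained classifier; the filter alone says nothing about intent, which is part of why the training questions raised below matter.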
Public authorities are still working out the practical purposes of the tool, and questions regarding the algorithm’s training method remain unresolved. So far, the algorithm has learned to classify tweets as hateful or non-hateful according to subjective
N/A
N/A
Google’s search algorithm has inflicted serious harm on minority groups. The search engine has been found to feature the auto-fill query “are Jews evil” and to tag African-Americans as “gorillas” within the images section. Google search has also been criticized for promoting Islamophobia by suggesting offensive and dangerous queries through its autofill function. For example, if a user typed “does Islam” in the Google search bar, the algorithm’s first autofill suggestion was “does Islam permit terrorism” (Abdelaziz 2017). This has troubling effects in the real world, given that research has demonstrated a clear correlation between anti-Muslim searches and anti-Muslim hate crimes (Soltas and Stephens-Davidowitz 2015).
Google has announced that it will continuously remove hateful and offensive content and tweak its algorithm in order to permanently rid itself of the problem, but experts are not optimistic this is entirely possible. With millions of new pages coming online e
Yes, there is a study by New York University School of Law’s Brennan Center for Justice (Levinson-Waldman, 2018).
N/A
The Boston Police Department has implemented an algorithmic monitoring tool developed by Geofeedia (a social media intelligence platform) for detecting potential threats on social media. The software allows law enforcement agencies to trace social media posts and associate them with geographic locations, something that has reportedly been used to target political activists of all sorts by both police departments and private firms (Fang 2016). For instance, it was purportedly deployed against protestors in Baltimore as part of the pilot phase of the project (Brandom 2016). This triggered a reaction from major social media companies that denied Geofeedia access to their data according to a report by the American Civil Liberties Union (Cagle 2016).
On top of that, in 2016 the American Civil Liberties Union of Massachusetts (ACLUM) discovered that between 2014 and 2016 the BPD had been using a set of keywords to identify misconduct and discriminatory behaviors online without notifying the
N/A
N/A
In March 2016, Microsoft launched an algorithmic chatbot called “Tay” that was created to seem like a teenage girl. The algorithm was designed to learn by interacting with real people on Twitter and the messaging apps Kik and GroupMe.
Tay’s Twitter profile contained the following warning: “The more you talk the smarter Tay gets”. At first, the chatbot seemed to work: Tay’s posts on Twitter really did resemble those of a typical teenage girl. However, after less than a day, Tay started posting explicitly racist, sexist, and anti-Semitic content (Rodriguez 2016). A Microsoft spokesperson could only defend the behavior by saying:
N/A
N/A
Women, among other marginalized communities, often have difficulties finding work due to inherent algorithmic biases (Bharoocha 2019). A brief overview of Uber’s technical workforce illustrates this point. In 2017, the company’s technological leadership was entirely composed of White and Asian persons, with 88.7 percent of employees being male. Uber utilizes an algorithm that it claims selects top talent and speeds up the hiring process. The system evaluates the resumes of previous successful hires at Uber, other hiring data, and select keywords that align with the job description (ibid). Given that the majority of Uber’s previous successful hires were White and Asian males, it makes sense that the algorithm continues to discriminate against women and persons of color: the algorithm has been trained on biased data, and thus reproduces that bias. Textio, a firm that helps companies implement gender-neutral language in job descriptions, has demonstrated how many key
N/A
N/A
Consulting companies in the United States have begun selling proprietary algorithms to universities to help them target and recruit the most promising candidates according to criteria of the universities’ own choosing (O’Neil 2018). For example, the firm Noel-Levitz has created software called “Forecast Plus” that can estimate and rate enrollment prospects by geography, gender, ethnicity, field of study, academics, and more. The company RightStudent aggregates and sells data on prospective applicants’ finances, scholarship eligibility, and learning disabilities to colleges (idem).
Such algorithms raise questions regarding transparency and the reproduction of inequality. Beyond the algorithms and their criteria being kept secret from the public, scholars worry that universities will use these tools to locate the specific demographics of applicants who can boost their U.S. News & World Report ranking and pay full tuition, rather than focus on offering educational opportunities to the en
N/A
N/A
The Danish government had plans to create an automated system called ‘Gladsaxe’ that would track the status of vulnerable children and preemptively put them under state custody before a real crisis struck (Algorithm Watch 2019). The model utilized a points-based system that weighted various domestic circumstances with point values, for example, mental illness (3000 points), unemployment (500 points), missing a doctor’s appointment (1000 points) or a dentist’s appointment (300 points). Danes responded to news of the system with national uproar and mockery (“Oh no, I forgot the dentist. As a single parent I’d better watch out now…” (idem)).
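A rough sketch of the reported points logic is below; the weights are those cited above, while the variable names, the function, and the example household are illustrative assumptions rather than details of the actual Gladsaxe system.

```python
# Minimal sketch of an additive points model like the one reported for Gladsaxe.
# Weights come from the description above; everything else is hypothetical.
RISK_POINTS = {
    "parental_mental_illness": 3000,
    "parental_unemployment": 500,
    "missed_doctor_appointment": 1000,
    "missed_dentist_appointment": 300,
}

def household_risk_score(flags: set) -> int:
    """Sum the point values of every circumstance flagged for a household."""
    return sum(RISK_POINTS.get(flag, 0) for flag in flags)

score = household_risk_score({"parental_unemployment", "missed_dentist_appointment"})
print(score)  # 800 under this weighting
```

The simplicity is the point of the public criticism: an additive score treats a missed dentist appointment as interchangeable with a fraction of a mental illness diagnosis, with no context for either.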
‘Gladsaxe’ was part of a larger Danish governmental ‘ghetto-plan’ to fight against ‘parallel societies’. The strategy created a set of criteria for designating an area a ‘ghetto’, along with special guidelines to be applied to that area, such as “higher punishments for crimes, forcing children into public daycare at an early age, lifting the pro
N/A
N/A
In 2016 police forces on the Belgian coast began to apply predictive policing software to their operations, echoing a larger movement and investment within the Belgian government to build centralized, cloud-based police data infrastructure (Algorithm Watch 2019). The chief commissioner claims that the project has been successful and correlates a 40% drop in crime with the start of the project (idem). There are hopes to further bolster the system by connecting it to Automatic Number Plate Recognition cameras (idem). Belgian federal and local police have called for the expansion of predictive policing, believing it to be a critical tool that not only saves time and money but prevents and stops crime. According to a spokesperson for the Federal Police, these tools and systems are currently being engineered for the national scale, with data sets being collected and aggregated from police forces and third-party sources (idem).
Similar to other predictive policing software, the model relies on hi
Yes. An impact assessment has been carried out (Quijano-Sánchez et al., 2018), but it does not account for issues of discrimination.
N/A
VeriPol was created by an international team of researchers to help police identify fraudulent reports. The system uses natural language processing and machine learning to analyze a complaint and predict the likelihood that it is fake. It focuses its analysis on three critical variables within a complaint: the modus operandi of the aggression, the morphosyntax (grammatical and logical composition) of the report, and the amount of detail given by the caller (Objetivo Castilla-La Mancha Noticias 2018). The system learns from two different data sets, one formed by false complaints and the other by regular complaints (Pérez Colomé 2018). The system has been tested on over one thousand occasions by the Spanish National Police since 2015 and has earned a 91% accuracy rate. VeriPol was made in response to a recent increase in fabricated reports of violent robberies, intimidation, and theft (Peiró 2019). Researchers claim that this tool could help improve the
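The general shape of such a system can be sketched as a text classifier trained on labelled complaints. The code below is a generic bag-of-words baseline with made-up examples and labels; it is not VeriPol’s actual model, feature set, or training data.

```python
# Minimal sketch: a supervised text classifier for complaints, assuming a corpus
# labelled false (1) vs. regular (0). Illustrative only, not VeriPol itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data.
reports = [
    "two unknown men grabbed my phone from behind and ran away",
    "my bag was stolen on the bus while I was asleep near the window",
]
labels = [1, 0]  # 1 = false complaint, 0 = regular complaint

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

new_report = "an unknown assailant took my wallet near the station"
# Estimated probability that the new report is false, under this toy model.
print(model.predict_proba([new_report])[0][1])
```

In practice the interesting work is in the features (modus operandi, morphosyntax, level of detail) and in validating accuracy against cases whose truth is actually known, which is where critics focus their questions.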
N/A
N/A
In 2009 Catalan prisons began using an algorithmic tool to evaluate the risk of violent recidivism (reoffending violently after release) for all prisoners in the region. The mechanism, called e-Riscanvi (e-Riskchange or e-Riesgocambio), was established by the Catalan Consejeria de Justicia (Ministry of Justice) to assess the risk of violent reoffense even for inmates with no previous violent background. According to public documents, the ultimate aim of the system is to improve the treatment of inmates (Europa Press 2009). The system classifies behavior into four categories: self-directed (suicide or self-harm), intra-institutional (against inmates or prison staff), violation of sentence (escape or failure to return from a permit) and violent crimes (Generalitat de Catalunya 2016). After categorizing behavioral inputs and analyzing all 43 variables within a case, the model determines whether the prisoner has a high, medium, or low risk of violent reoffense after release.
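As a rough illustration of how a fixed-variable protocol can map a case onto a risk band, here is a minimal sketch. The variable names, weights, and cut-offs are assumptions made purely for illustration and do not reflect the actual e-Riscanvi instrument or its 43 variables.

```python
# Minimal sketch: weighted scoring of case variables mapped to a risk band.
# All weights and thresholds below are hypothetical.
WEIGHTS = {
    "prior_violent_incidents": 3.0,
    "substance_abuse": 2.0,
    "lack_of_family_support": 1.5,
    "age_at_first_offence_over_25": -0.5,
}

def risk_band(case: dict) -> str:
    """case maps variable names to numeric values; returns 'high'/'medium'/'low'."""
    score = sum(WEIGHTS[k] * case.get(k, 0) for k in WEIGHTS)
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(risk_band({"prior_violent_incidents": 2, "substance_abuse": 1}))  # "high"
```

The choice of cut-offs in such a protocol is consequential: small shifts in thresholds move inmates between bands, which is one reason risk instruments of this type attract scrutiny.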
The establishment of
Yes. Scholars have found statistical evidence that the system produces both false positives (households that should not be included but are receiving subsidies) and false negatives (households that are in need but are excluded), as well as evidence that it discriminates against persons displaced by violence, those with low educational levels, and the elderly. Other authors also argue that the system disproportionately denies minorities crucial access to subsidized health programs (Candelo, Gaviria, Polanía and Sethi, 2010).
The Constitutional Court in Colombia has heard various cases “where individuals who have been classified erroneously argue that their rights and the principle of equality have been violated in their classification into the SISBEN indexing system” (Candelo, Gaviria, Polanía and Sethi, 2010).
SISBEN is a composite welfare index used by the Colombian government to target groups for social programs. The program utilizes survey and financial data to categorize households across levels of need in order to allocate welfare services and government subsidies efficiently and fairly. While SISBEN searches for society’s most vulnerable in an attempt to achieve redistributive goals, there have been reports of discrimination and exclusion as a result of the program (Candelo, Gaviria, Polanía and Sethi, 2010). The constitutional court in Colombia has heard various cases “where individuals who have been classified erroneously argue that their rights and the principle of equality have been violated in their classification into the SISBEN indexing system” (ibid). Scholars have encountered statistical support demonstrating the system’s production of both false positives (households that should not be included but are receiving subsidies) and false negatives (households that are in need but
N/A
N/A
The Italian city of Trento has launched an ‘eSecurity’ initiative with the support of various backers and partners, including the European Commission, the Faculty of Law at the University of Trento, ICT Centre of Fondazione Bruno Kessler and the Trento Police Department (AlgorithmWatch, 2019). The project describes itself as “the first experimental laboratory of predictive urban security”, and subscribes to the criminological philosophy of ‘hot spots’ where “in any urban environment, crime and deviance concentrate in some areas (streets, squares, etc.) and that past victimization predicts future victimization” (ibid). The system is largely modeled after predictive policing platforms in the US and UK, and employs algorithms that learn to identify these criminal ‘hot spots’ from analyzing historical data sets.
While Trento’s eSecurity initiative appears to offer an objective method to predict and tackle crime, thereby allocating police resources efficiently and promoting public safety, it
Yes. In Kent (UK), the system went through an assessment after the first year in which it was deployed (PredPol operational review, 2014). However, the assessment didn't include any references to its social impact.
Later, PredPol was also studied by the Human Rights Data Analysis Group (HRDAG), which found that the algorithm was biased against neighborhoods inhabited primarily by low-income people and minorities (Lum 2016). They attribute such bias to the fact that most drug crimes were previously registered in these neighborhoods, thus police officers were directed by the algorithm to already over-policed communities. Because of this, HRDAG argues that the algorithm failed: it did not unlock data-driven insights into drug use previously unknown to police, rather it reinforced established inequalities surrounding drug policing in low-income and minority neighborhoods (ibid).
N/A
Near the end of 2013, Uruguay’s Ministry of the Interior acquired a license for a popular predictive policing software called PredPol. PredPol is a proprietary algorithm utilized by police forces around the world, including the Oakland Police Department in California and Kent Police in England. Trained on historical crime datasets, the software relies on a machine learning algorithm to analyze three variables (crime type, location, and date/time) in order to predict crime ‘hot spots’ (Ortiz Freuler and Iglesias 2018). From its analysis, PredPol creates custom maps that direct police attention to 150 square meter ‘hot spots’ where crime is statistically likely to occur.
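The general idea behind grid-based forecasting can be sketched simply: bin historical incidents into fixed-size cells and rank the cells by recorded crime. The code below is only an illustration of that idea under assumed parameters; it is not PredPol’s proprietary model, and the cell size is taken loosely from the 150-meter figure above.

```python
# Minimal sketch of grid-based hot-spot ranking from historical incident records.
# Cell size and data format are assumptions for illustration.
from collections import Counter

CELL_SIZE_M = 150  # assumed cell edge length in metres

def cell_of(x_m: float, y_m: float) -> tuple:
    """Map projected coordinates (in metres) to a grid cell index."""
    return (int(x_m // CELL_SIZE_M), int(y_m // CELL_SIZE_M))

def hot_spots(incidents, top_n=10):
    """incidents: iterable of (crime_type, x_m, y_m, timestamp) records.
    Returns the top_n cells with the most recorded incidents."""
    counts = Counter(cell_of(x, y) for _, x, y, _ in incidents)
    return counts.most_common(top_n)
```

Because the ranking is driven entirely by where crime was previously recorded, cells in heavily policed areas accumulate more records and rank higher, which is the feedback loop the HRDAG critique above describes.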
PredPol—and the field of predictive policing in general—has been criticized for its opacity and capacity for discrimination.
N/A
N/A
Amidst Black Lives Matter protests and calls for racial justice reform in the United States, the story of Robert Williams, a Black man wrongfully arrested because of a faulty facial recognition match, underscores how current technologies risk exacerbating preexisting inequalities.
After facial recognition software used by the Michigan State Police matched Robert Williams’ face with a still image from a surveillance video of a man stealing $3,800 worth of watches, officers from the Detroit Police Department arrived at Mr. Williams’ house to arrest him (Hill 2020). While Mr. Williams sat in the interrogation room knowing that he was innocent of the crime, he had no idea that his case would become the first known account in the United States of a wrongful arrest based on a facial recognition algorithm (ibid).
N/A
N/A
Content recommendation algorithms, acting as information promoters and gatekeepers online, play an increasingly important role in shaping today’s society, politics, and culture. Given that YouTube is estimated to have the second most web traffic after Google and that 70% of the videos YouTube users watch are suggested by its algorithm, it is safe to say that YouTube’s recommendation system commands much of the world’s attention (Hao 2019).
From this position of power, the algorithm has amassed serious criticism, with critics asserting that the recommendation system leads users down rabbit holes of content and systematically exposes them to extreme material (Roose 2019). Since YouTube’s algorithm is built to engage users and keep them on the platform, it often suggests content that users have already expressed interest in. The unintended but problematic repercussion of this feedback loop is “that users consistently migrate from milder to more extreme content” on the platform (
N/A
N/A
The Swedish city of Trelleborg has turned to an automated decision-making system to allocate social benefits. The system checks and cross-checks benefit applications across various databases (for example, the tax agency and the unit for housing support). Because the system issues decisions automatically, the city has been able to considerably downsize its social benefits department, reducing its number of caseworkers from 11 to just 3 (AlgorithmWatch 2019). The municipality has also reported that the number of people receiving social benefits has substantially decreased since implementing the automated system. While the initiative has received various innovation prizes, applicants and citizens have been left uninformed about the automation process.
N/A
N/A
The “S.A.R.I.” (Sistema Automatico di Riconoscimento Immagini), or Automated Image Recognition System, is an algorithmic facial recognition tool used by Italy’s national police force. The software can process live footage and identify recorded subjects through facial matching (AlgorithmWatch 2019). In 2018, SARI made headlines when it correctly identified two burglars in Brescia as a result of its algorithmic matching process. Despite that success, questions have been raised regarding the accuracy of the software (the risks it poses to justice and safety due to a susceptibility to produce false positives and false negatives), as well as the cybersecurity and privacy guidelines it follows.
N/A
N/A
The Polish government has implemented decision-making algorithms that analyze a variety of factors in order to match children with schools. The algorithms consider factors like the “number of children, single-parent households, food allergies of children, handicaps, and material situation” (AlgorithmWatch 2019). One such system, Platforma Zarządzania Oświatą (Education Management Platform), built by Asseco Data Systems, is utilized in 20 Polish cities and has assigned students to more than 4,500 schools and preschools (ibid). The system centralizes a wide range of functions, managing “recruitment at various school levels (including pre-school recruitment), electronic diaries, student information management, attendance analysis, equipment inventory, issuing certificates and IDs, calculation and payment of student scholarships, school-parents communication, and school-organs, superior management of the organization of educational institutions (organization sheet, lesson plans, recruitment
N/A
N/A
COVID-19 has forced school administrators around the globe to make dramatic alterations. In England, Education Secretary Gavin Williamson cancelled the summer exam series and announced that an algorithm would award grades to students who would have been taking the GCSE and A-level exams (Lightfoot 2020). Ofqual, England’s exam regulator, explained that its algorithm would assign grades using a standardized model that examined a combination of variables, including a student’s “teacher assessment, class ranking and the past performance of their schools” (Adams 2020).
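One way to picture the standardization step is rank-to-distribution mapping: each student’s position in the teacher-supplied class ranking is mapped onto the school’s historical grade distribution. The sketch below illustrates that idea only; Ofqual’s actual model was considerably more elaborate, and the function, data, and grade shares here are assumptions.

```python
# Minimal sketch: map a class ranking onto a school's historical grade shares.
# Purely illustrative; not Ofqual's model.
def assign_grades(ranked_students, historical_grade_shares):
    """ranked_students: list of names, best first.
    historical_grade_shares: e.g. {"A": 0.2, "B": 0.4, "C": 0.4}, summing to 1."""
    n = len(ranked_students)
    grades, boundary, assigned = {}, 0.0, 0
    for grade, share in historical_grade_shares.items():
        boundary += share
        # Fill this grade band up to its cumulative share of the cohort.
        while assigned < n and (assigned + 1) / n <= boundary + 1e-9:
            grades[ranked_students[assigned]] = grade
            assigned += 1
    # Any remainder left by rounding falls into the last grade band.
    for name in ranked_students[assigned:]:
        grades[name] = grade
    return grades

print(assign_grades(["Ana", "Ben", "Cal", "Dee", "Eli"],
                    {"A": 0.2, "B": 0.4, "C": 0.4}))
```

The sketch makes the core objection visible: under this kind of scheme a strong student at a historically low-performing school cannot receive a grade the school has rarely awarded, regardless of their own work.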
This decision has not been without controversy. Not only have experts warned that a system dependent upon teacher evaluations could hurt already disadvantaged students, but parents and students alike have started a grassroots movement in opposition to the algorithmic system.
N/A
N/A
Local and federal governments in Argentina continue to invest in a future of algorithmic and data-based governance. On the national level, the federal government has developed a database profiling citizens based on socioeconomic data in order to allocate social benefits more efficiently, as well as a database that stores civilian biometric data to improve public safety and criminal investigations.
In June 2017, the local government of the province of Salta partnered with Microsoft to create and deploy two different predictive tools optimized for identifying teenage pregnancies and school dropouts (Ortiz Freuler and Iglesias 2018). Trained from private datasets made available by the province’s Ministry of Early Childhood, the two systems identify those with the highest risk of teenage pregnancy and dropping out of school, alerting governmental agencies upon determining high-risk subjects.
N/A
N/A
The Spanish Public Employment Service (SEPE) utilizes an automated platform to determine unemployment benefits and distribute employment opportunities and job training to the unemployed (AlgorithmWatch 2019). Since implementation, the number of people receiving unemployment benefits has dropped by more than 50% (ibid). While it is unclear whether the automated system is entirely to blame for the denial of benefits, scholars argue that deficient data reconciliation, that is, system malfunction stemming from incorrect data and input error, is an inevitable and commonplace problem within algorithmic systems and is the likely culprit within the SEPE.
N/A
N/A
In 2013, the City of Los Angeles developed a Coordinated Entry System (CES) to allocate housing to homeless populations. CES is based on the effective housing-first approach, which first aims to get a roof over the heads of homeless persons and then to provide assistance in other ways. First, homeless individuals are asked to fill out a survey, which gathers their information and organizes it into a database. Then, an algorithm ranks the cases on a “vulnerability index” so that those who need housing the most will be helped first (Misra, 2018).
While there is an argument to be made for CES’ prioritization principle (there are 58,000 unhoused people in Los Angeles County alone, and there are not currently enough housing resources for everyone), the compulsory survey has been found to ask private and even criminalizing questions regarding sensitive behavior. For example, it asks: “if you are having sex without protection; if you’re trading sex for money or drugs; if you’re thinking of h
N/A
The U.K.’s Commission for Racial Equality found St. George’s Medical School guilty of practising racial and sexual discrimination in its admissions process in 1988.
One of the first cases of algorithmic bias took place in the 1970s at St. George’s Hospital Medical School in the United Kingdom. Hoping to make the application process more efficient and less burdensome on the administration, St. George’s deployed a computer program to do initial screenings of applicants. The program was trained on a sample data set of past screenings, analysing which applicants had historically been accepted to the medical school. Having learned from this data set, the program went on to deny interviews to as many as 60 applicants because they were female or had names that did not sound European (Garcia 2017). In 1988, the United Kingdom’s Commission for Racial Equality charged St. George’s Medical School with practising racial and sexual discrimination throughout its admissions process (ibid). While St. George’s had no intention of committing racial and sexual discrimination, its new computer program had learned from a structurally biased admissions proces
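The mechanism is worth making concrete: a screener fit to historical decisions encodes whatever patterns those decisions contain. The sketch below uses invented features and data to show how a model trained on biased past outcomes picks up negative weights on protected attributes; it is not a reconstruction of the St. George’s program, whose internals worked differently.

```python
# Minimal sketch: fitting a screener to biased historical screening outcomes.
# Features, data, and labels are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Columns: [is_female, has_non_european_name, exam_score]
# Label: 1 = historically invited to interview, 0 = rejected at screening.
X = [
    [0, 0, 0.90], [0, 0, 0.60],   # male, European-sounding name: invited
    [1, 0, 0.90], [1, 1, 0.95],   # strong candidates rejected in the past
    [0, 1, 0.85], [1, 1, 0.70],
]
y = [1, 1, 0, 0, 0, 0]

clf = LogisticRegression().fit(X, y)
# Negative coefficients on the first two columns are the learned bias:
# the model penalizes being female or having a non-European-sounding name.
print(clf.coef_)
```

Nothing in the training procedure distinguishes “signal” from historical discrimination, which is exactly why the program reproduced the admissions office’s past behaviour so faithfully.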
Yes. A study published by the MIT Media Lab in early 2019 found that Rekognition, Amazon’s facial recognition system, performed substantially worse when identifying an individual’s gender if they were female or darker-skinned. Rekognition committed zero errors when identifying the gender of lighter-skinned men, but it confused women for men 19% of the time and mistook darker-skinned women for men 31% of the time (Raji and Buolamwini 2019). Similarly, a previous test conducted by the ACLU found that while scanning pictures of members of Congress, Rekognition falsely matched 28 individuals with police mugshots (Cagle and Ozer 2018).
N/A
Facial recognition algorithms have repeatedly proven unreliable when classifying people who are not white men. A study published by the MIT Media Lab in early 2019 found that Rekognition, Amazon’s facial recognition system, performed substantially worse when identifying an individual’s gender if they were female or darker-skinned. Rekognition committed zero errors when identifying the gender of lighter-skinned men, but it confused women for men 19% of the time and mistook darker-skinned women for men 31% of the time (Raji and Buolamwini 2019). Similarly, a previous test conducted by the ACLU found that while scanning pictures of members of Congress, Rekognition falsely matched 28 individuals with police mugshots (Cagle and Ozer 2018).
Shortly after Buolamwini’s report was published, Microsoft, IBM and the Chinese firm Megvii vowed to improve their facial recognition software, whereas Amazon denied that the research suggested anything about the performance of it
N/A
N/A
In 2018, the Polish Ministry of Justice implemented the “System of Random Allocation of Cases” (or the System Losowego Przydziału Spraw), an algorithmic system that randomly matches judges with cases across the judicial system (Algorithm Watch 2019).
The initiative has faced intense public scrutiny since first being piloted in three Polish cities in 2017. While no quantitative investigations have been conducted on the topic, qualitative evidence suggests that the system’s matching of cases is not random (ibid). Watchdog groups and activists contend that the system’s randomness is impossible to verify given the opacity of the algorithm. While the Ministry has released documents explaining how the algorithm operates, it refuses to release the source code to the public, having declined a 2017 freedom of information request by the NGOs ePaństwo and Watchdog Polska to divulge the details of the system (ibid).
N/A
N/A
The Finnish startup Utopia Analytics trains machine intelligence to do content moderation on social media platforms, discussion forums, and e-commerce sites (Algorithm Watch, 2019). The firm hopes to “bring democracy back into social media” by encouraging safe community dialogue (ibid). Utopia Analytics sells an AI (specifically, a neural network) that is first trained by humans from a body of sample content and then is left to moderate on its own. Human moderators retain the ability to make decisions in complicated and controversial scenarios.
Despite the service’s implementation on various sites, such as the Swiss peer-to-peer sales platform tutti.ch and the Finnish public forum Suomi24, Mark Zuckerberg has denounced Utopia Analytics’ technology as primitive and inaccurate. He claims that automated content moderation is at least 5-10 years away due to the significant progress needed before machine intelligence can comprehend the subtleties and situational context inherent to crac
N/A
N/A
In 2016 documents surfaced showing that the Danish police and the Danish intelligence service had bought policing software from the US firm Palantir (AlgorithmWatch 2019). The system draws information from diverse data silos, centralizing “document and case handling systems, investigation support systems, forensic and mobile forensic systems, as well as different types of acquiring systems such as open source acquisition, and information exchange between external police bodies” (ibid). While originally adopted as an anti-terrorism measure, experts theorize that the software will undoubtedly serve as the foundation for predictive policing initiatives (ibid). Two years prior, the Danish police had implemented an automatic license plate control system, mounted on police cars, that scanned license plates in order to identify persons of interest in real time. Human rights advocates have called for more public oversight and general transparency in regard to the surveillance sy
N/A
N/A
iBorderCtrl, software developed by the firm European Dynamics, aims to help border security screen non-EU nationals at EU borders (AlgorithmWatch 2019). By using “cheating biomarkers”, the system determines whether a person is lying during an “automated interview” using a virtual border guard system that tracks facial and body movements. If iBorderCtrl suspects the person to be lying, the system asks for biometric information and prompts a personal interview. iBorderCtrl’s purpose is “to reduce the subjective control and workload of human agents and to increase the objective control with automated means that are noninvasive and do not add to the time the traveler has to spend at the border” (ibid). That being said, the system still relies on a “human-in-the-loop principle” where real border security agents are monitoring the process.
Scholars argue that iBorderCtrl processes sensitive information while falling into a pseudo-scientific trap: detecting a lie requires complex psychologic
N/A
N/A
Amidst government plans to invest €30 million in AI infrastructure, the Flemish government has created an algorithm to streamline the processing of job seekers using the public employment service (VDAB) (AlgorithmWatch 2019). The system creates profiles of job applicants based on their online behavioral data, predicting their interests and job suitability. Job seekers’ motivation to secure employment is tracked by the algorithm based on information about their activity, communication response rate, and frequency of use of the VDAB (ibid).
The system has faced criticism regarding its implications for the privacy of job seekers as well as exacerbating pre-existing inequalities. Caroline Copers, the general secretary of the Flemish wing of ABVV, argues that the job seekers who require the most assistance from the public employment service of Flanders will be disadvantaged by the algorithmic system (ibid). Because this demographic has lower rates of digital literacy and computer usa