DOI: 10.2139/ssrn.3547922
Published 3 March 2020 in Computer Law and Security Review

Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI

Chris Russell, Sandra Wachter, B. Mittelstadt

Abstract

In recent years a substantial literature has emerged concerning bias, discrimination, and fairness in AI and machine learning. Connecting this work to existing legal non-discrimination frameworks is essential to create tools and methods that are practically useful across divergent legal regimes. While much work has been undertaken from an American legal perspective, comparatively little has mapped the effects and requirements of EU law. This Article addresses this critical gap between legal, technical, and organisational notions of algorithmic fairness. Through analysis of EU non-discrimination law and jurisprudence of the European Court of Justice (ECJ) and national courts, we identify a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness. A clear gap exists between statistical measures of fairness as embedded in myriad fairness toolkits and governance mechanisms and the context-sensitive, often intuitive and ambiguous discrimination metrics and evidential requirements used by the ECJ; we refer to this approach as “contextual equality.”

This Article makes three contributions. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU’s current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Many of the concepts fundamental to bringing a claim, such as the composition of the disadvantaged and advantaged group, the severity and type of harm suffered, and requirements for the relevance and admissibility of evidence, require normative or political choices to be made by the judiciary on a case-by-case basis. We show that automating fairness or non-discrimination in Europe may be impossible because the law, by design, does not provide a static or homogenous framework suited to testing for discrimination in AI systems.

Second, we show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate. Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes) which can act as a signal to victims that discrimination has occurred. Equivalent signalling mechanisms and agency do not exist in algorithmic systems. Compared to traditional forms of discrimination, automated discrimination is more abstract and unintuitive, subtle, intangible, and difficult to detect. The increasing use of algorithms disrupts traditional legal remedies and procedures for detection, investigation, prevention, and correction of discrimination which have predominantly relied upon intuition. Consistent assessment procedures that define a common standard for statistical evidence to detect and assess prima facie automated discrimination are urgently needed to support judges, regulators, system controllers and developers, and claimants.

Finally, we examine how existing work on fairness in machine learning lines up with procedures for assessing cases under EU non-discrimination law. A ‘gold standard’ for assessment of prima facie discrimination has been advanced by the European Court of Justice but not yet translated into standard assessment procedures for automated discrimination. We propose ‘conditional demographic disparity’ (CDD) as a standard baseline statistical measurement that aligns with the Court’s ‘gold standard’. Establishing a standard set of statistical evidence for automated discrimination cases can help ensure consistent procedures for assessment, but not judicial interpretation, of cases involving AI and automated systems. Through this proposal for procedural regularity in the identification and assessment of automated discrimination, we clarify how to build considerations of fairness into automated systems as far as possible while still respecting and enabling the contextual approach to judicial interpretation practiced under EU non-discrimination law.
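As a rough illustration of how a measure like CDD could be computed in practice, the sketch below follows a common formulation in which demographic disparity for a protected group is its share among those receiving the negative outcome minus its share among those receiving the positive outcome, averaged over strata of a legitimate explanatory attribute and weighted by stratum size. The column names and toy admissions data are illustrative assumptions, not drawn from the Article itself.

```python
import pandas as pd

def demographic_disparity(df, group_col, outcome_col, group):
    """DD for `group`: its share among rejected (outcome 0) minus its share among accepted (outcome 1)."""
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    p_rejected = (rejected[group_col] == group).mean() if len(rejected) else 0.0
    p_accepted = (accepted[group_col] == group).mean() if len(accepted) else 0.0
    return p_rejected - p_accepted

def conditional_demographic_disparity(df, group_col, outcome_col, strata_col, group):
    """CDD: demographic disparity averaged over strata of an explanatory attribute, weighted by stratum size."""
    total = len(df)
    cdd = 0.0
    for _, stratum in df.groupby(strata_col):
        dd = demographic_disparity(stratum, group_col, outcome_col, group)
        cdd += (len(stratum) / total) * dd
    return cdd

# Illustrative usage on synthetic admissions data (columns are hypothetical)
data = pd.DataFrame({
    "gender":     ["f", "f", "m", "m", "f", "m", "f", "m"],
    "admitted":   [0,    1,   1,   1,   0,   0,   1,   1],
    "department": ["A", "A", "A", "A", "B", "B", "B", "B"],
})
print(conditional_demographic_disparity(data, "gender", "admitted", "department", "f"))
```

A positive value would indicate that, after conditioning on department, the group is over-represented among rejections relative to admissions; a value near zero indicates no such conditional disparity in this toy example.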

Related Scientific Articles

Fairness & friends in the data science era

Giovanna Guerrini, B. Catania, Chiara Accinelli

9 June 2022

The data science era is characterized by data-driven automated decision systems (ADS) enabling, through data analytics and machine learning, automated decisions in many contexts, deeply impacting our lives. As such, their downsides and potential risks are becoming more and more evident: technical solutions, alone, are not sufficient and an interdisciplinary approach is needed. Consequently, ADS should evolve into data-informed ADS, which keep humans in the loop in all data processing steps. Data-informed ADS should deal with data responsibly, guaranteeing nondiscrimination with respect to protected groups of individuals. Nondiscrimination can be characterized in terms of different types of properties, like fairness and diversity. While fairness, i.e., absence of bias against minorities, has been widely investigated in machine learning, only more recently has this issue been tackled by considering all the steps of the data processing pipelines underlying ADS, from data acquisition to analysis. Additionally, fairness is just one point of view of nondiscrimination to be considered for guaranteeing equity: other issues, like diversity, are attracting interest from the scientific community due to their relevance in society. This paper critically surveys how nondiscrimination has been investigated in the context of the complex data science pipelines underlying data-informed ADS, focusing on the specific data processing tasks for which nondiscrimination solutions have been proposed.

A Review on Fairness in Machine Learning

E. Shmueli, Dana Pessach

3 February 2022

An increasing number of decisions regarding the daily lives of human beings are being controlled by artificial intelligence and machine learning (ML) algorithms in spheres ranging from healthcare, transportation, and education to college admissions, recruitment, provision of loans, and many more realms. Since they now touch on many aspects of our lives, it is crucial to develop ML algorithms that are not only accurate but also objective and fair. Recent studies have shown that algorithmic decision making may be inherently prone to unfairness, even when there is no intention for it. This article presents an overview of the main concepts of identifying, measuring, and improving algorithmic fairness when using ML algorithms, focusing primarily on classification tasks. The article begins by discussing the causes of algorithmic bias and unfairness and the common definitions and measures for fairness. Fairness-enhancing mechanisms are then reviewed and divided into pre-process, in-process, and post-process mechanisms. A comprehensive comparison of the mechanisms is then conducted, toward a better understanding of which mechanisms should be used in different scenarios. The article ends by reviewing several emerging research sub-fields of algorithmic fairness, beyond classification.
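The "common definitions and measures for fairness" surveyed in such reviews are typically expressed as differences in outcome or error rates between groups. As a minimal, illustrative sketch (not code from the article), the snippet below computes two widely used group metrics, statistical parity difference and equal opportunity difference, on toy predictions; variable names and data are assumptions.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(positive prediction | unprivileged group) - P(positive prediction | privileged group)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true positive rates between unprivileged and privileged groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Toy predictions: group 0 = unprivileged, group 1 = privileged
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, group))      # -0.25
print(equal_opportunity_difference(y_true, y_pred, group))  # about -0.33
```

Values close to zero indicate parity under the respective criterion; which criterion (if any) is appropriate remains a context-dependent choice.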

Fairness in Machine Learning: A Survey

C. Haas, Simon Caton

4 October 2020

When Machine Learning technologies are used in contexts that affect citizens, companies as well as researchers need to be confident that there will not be any unexpected social implications, such as bias towards gender, ethnicity, and/or people with disabilities. There is significant literature on approaches to mitigate bias and promote fairness, yet the area is complex and hard to penetrate for newcomers to the domain. This article seeks to provide an overview of the different schools of thought and approaches that aim to increase the fairness of Machine Learning. It organizes approaches into the widely accepted framework of pre-processing, in-processing, and post-processing methods, subcategorizing into a further 11 method areas. Although much of the literature emphasizes binary classification, a discussion of fairness in regression, recommender systems, and unsupervised learning is also provided along with a selection of currently available open source libraries. The article concludes by summarizing open challenges articulated as five dilemmas for fairness research.
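One concrete instance of the post-processing family described above is to derive decisions from model scores using group-specific thresholds chosen to equalise selection rates. The sketch below is a hypothetical illustration with made-up scores and thresholds; it does not reproduce any particular method from the survey.

```python
import numpy as np

def group_thresholds(scores, group, thresholds):
    """Post-processing: apply a per-group decision threshold to model scores."""
    scores, group = np.asarray(scores), np.asarray(group)
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, group)])

# Illustrative usage: a lower threshold for group 0 equalises selection rates here
scores = np.array([0.30, 0.55, 0.62, 0.48, 0.71, 0.40, 0.85, 0.66])
group  = np.array([0,    0,    0,    0,    1,    1,    1,    1])
decisions = group_thresholds(scores, group, thresholds={0: 0.45, 1: 0.60})
print(decisions, decisions[group == 0].mean(), decisions[group == 1].mean())
```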

Auditing the AI auditors: A framework for evaluating fairness and bias in high stakes AI predictive models.

Tara S. Behrend, R. Landers

14 February 2022

Researchers, governments, ethics watchdogs, and the public are increasingly voicing concerns about unfairness and bias in artificial intelligence (AI)-based decision tools. Psychology's more than a century of research on the measurement of psychological traits and the prediction of human behavior can benefit such conversations, yet psychological researchers often find themselves excluded due to mismatches in terminology, values, and goals across disciplines. In the present paper, we begin to build a shared interdisciplinary understanding of AI fairness and bias by first presenting three major lenses, which vary in focus and prototypicality by discipline, from which to consider relevant issues: (a) individual attitudes, (b) legality, ethicality, and morality, and (c) embedded meanings within technical domains. Using these lenses, we next present psychological audits as a standardized approach for evaluating the fairness and bias of AI systems that make predictions about humans across disciplinary perspectives. We present 12 crucial components to audits across three categories: (a) components related to AI models in terms of their source data, design, development, features, processes, and outputs, (b) components related to how information about models and their applications are presented, discussed, and understood from the perspectives of those employing the algorithm, those affected by decisions made using its predictions, and third-party observers, and (c) meta-components that must be considered across all other auditing components, including cultural context, respect for persons, and the integrity of individual research designs used to support all model developer claims.

Diversity, Equity, and Inclusion in Artificial Intelligence: An Evaluation of Guidelines

Gaelle Cachat-Rosset, Alain Klarsfeld

22 February 2023

Artificial intelligence (AI) is present everywhere in the lives of individuals. Unfortunately, several cases of discrimination by AI systems have already been reported. Scholars have warned of the risks of AI reproducing existing inequalities or even amplifying them. To tackle these risks and promote responsible AI, many ethics guidelines for AI have emerged recently, including diversity, equity, and inclusion (DEI) principles and practices. However, little is known about the DEI content of these guidelines, and to what extent they meet the most relevant accumulated knowledge from the DEI literature. We performed a semi-systematic literature review of AI guidelines regarding DEI stakes and analyzed 46 guidelines published from 2015 to today. We distilled the 14 DEI principles and the 18 DEI practices underlying these 46 guidelines. We found that the guidelines mostly encourage one of the DEI management paradigms, namely fairness, justice, and nondiscrimination, in a limited compliance approach. We found that narrow technical practices are favored over holistic ones. Finally, we conclude that recommended practices for implementing DEI principles in AI should include actions aimed at directly influencing AI actors' behaviors and awareness of DEI risks, rather than just stating intentions and programs.
