Document Type

Article

Publication Date

2019

OCLC FAST subject heading

Civil rights

Abstract

Can algorithms be used to advance equality goals in the workplace? A handful of legal scholars have raised concerns that the use of big data at work may lead to protected-class discrimination that could fall outside the reach of current antidiscrimination law. Existing scholarship suggests that, because algorithms are “facially neutral,” they pose no problem of unequal treatment. As a result, algorithmic discrimination cannot be challenged using a disparate treatment theory of liability under Title VII of the Civil Rights Act of 1964 (Title VII). Instead, it presents a problem of unequal outcomes, subject to challenge only under Title VII’s disparate impact framework. Yet under current doctrine, scholars suggest, any disparate impact that results from an employer’s use of algorithmic decision-making could be excused as a justifiable business practice. Given this Catch-22, scholars propose either regulating the algorithms or reinterpreting the law.

This Article seeks to challenge current thinking on algorithmic discrimination. Both the “improve the algorithms” and the “improve the law” approaches focus solely on a clash between the anticlassification (formal equality) and antisubordination (substantive equality) goals of Title VII. But Title VII also serves an important antistereotyping goal: the principle that people should be treated not just equally across protected class groups but also individually, free from stereotypes associated with even one’s own group. This Article is the first to propose that some algorithmic discrimination may be challenged as disparate treatment using Title VII’s stereotype theory of liability. An antistereotyping approach offers guidance for improving hiring algorithms and the uses to which they are put, to ensure that algorithms are applied to counteract rather than reproduce bias in the workplace. Moreover, framing algorithmic discrimination as a problem of disparate treatment is essential for similar challenges outside the employment context, such as challenges to governmental use of algorithms in criminal justice brought under the Equal Protection Clause, which does not recognize disparate impact claims.

The current focus on ensuring that algorithms do not lead to new discrimination at work obscures the fact that the technology was intended to do more: to improve on human decision-making by suppressing biases and producing decisions that are both more efficient and less discriminatory. Applying the existing doctrine of Title VII more robustly, and incorporating a focus on its antistereotyping goal, may help deliver on the promise of moving beyond mere nondiscrimination and toward actively antidiscriminatory algorithms.
