“Limitations of the ‘Four-Fifths Rule’ and Statistical Parity Tests for Measuring Fairness,” an article coauthored by Professor Pauline Kim of WashULaw and Manish Raghavan of MIT Sloan and EECS, has been published in the Georgetown Law Technology Review.
In the abstract, they write:
To ensure the fairness of algorithmic decision systems, such as employment selection tools, computer scientists and practitioners often refer to the so-called “four-fifths rule” as a measure of a tool’s compliance with anti-discrimination law. This reliance is problematic because the “rule” is in fact not a legal rule for establishing discrimination, and it offers a crude test that will often be over- and under-inclusive in identifying practices that warrant further scrutiny. The “four-fifths rule” is one of a broader class of statistical tests, which we call Statistical Parity Tests (SPTs), that compare selection rates across demographic groups. While some SPTs are more statistically robust, all share some critical limitations in identifying disparate impacts retrospectively. When these tests are used prospectively as an optimization objective shaping model development, additional concerns arise about the development process, behavioral incentives, and gameability. In this Article, we discuss the appropriate role for SPTs in algorithmic governance. We suggest a combination of measures that take advantage of the additional information present during prospective optimization, providing greater insight into fairness considerations when building and auditing models.
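For readers unfamiliar with the test the abstract refers to: under the EEOC's Uniform Guidelines, a selection rate for any group that is less than four-fifths (80%) of the rate for the group with the highest rate is generally regarded by federal enforcement agencies as evidence of adverse impact. As a rough illustration of the kind of statistical parity comparison the authors critique, the sketch below computes impact ratios from a hypothetical applicant pool; the group labels and counts are invented for illustration and are not drawn from the article.

```python
def impact_ratios(selected_by_group, total_by_group):
    """Selection rate of each group divided by the highest group's rate.

    A ratio below 0.8 is what the "four-fifths rule" treats as a signal of
    possible adverse impact; it is a screening heuristic, not a legal
    finding of discrimination.
    """
    rates = {g: selected_by_group[g] / total_by_group[g] for g in total_by_group}
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

# Hypothetical applicant pool; numbers are purely illustrative.
selected = {"group_a": 48, "group_b": 30}
totals = {"group_a": 100, "group_b": 100}

for group, ratio in impact_ratios(selected, totals).items():
    status = "below 4/5 threshold" if ratio < 0.8 else "at or above 4/5"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

As the abstract notes, a threshold comparison of this kind can be both over- and under-inclusive, which is part of the authors' argument for supplementing such tests with richer measures during model development and auditing.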
Read the article in full here.