Four Fifths Rule

Also known as: 80% rule, 4/5ths rule, adverse impact ratio

A threshold from US employment law stating that a selection rate for any group below 80% of the highest group’s rate signals potential adverse impact. Originally designed for hiring decisions, the rule now applies to AI-powered screening tools that automate candidate evaluation.


What It Is

If an AI system screens job applicants and consistently passes fewer candidates from one demographic group, how would you know the gap is a problem? The four-fifths rule gives you a numeric test. Divide the selection rate of any group by the rate of the best-performing group. If the result drops below 0.8 (80%), you have a statistical red flag for adverse impact — a pattern suggesting the process may be discriminating, even unintentionally.

The rule comes from the Uniform Guidelines on Employee Selection Procedures, published in 1978. According to the EEOC, a selection rate for any group that is less than four-fifths of the rate for the group with the highest rate will generally be regarded as evidence of adverse impact. In practice: if 60% of male applicants pass a screening step and only 40% of female applicants pass, the ratio is 40 divided by 60, or roughly 0.67. Since 0.67 is below 0.80, the rule flags this gap as a potential problem.
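The arithmetic above is a single division and a threshold check, which a few lines of Python make concrete (function and variable names here are illustrative, not from any standard library):

```python
def adverse_impact_ratio(group_rate: float, highest_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate."""
    return group_rate / highest_rate

# Example from the text: 60% of male applicants pass, 40% of female applicants pass.
ratio = adverse_impact_ratio(0.40, 0.60)
print(round(ratio, 2))   # 0.67
print(ratio < 0.80)      # True -> flagged under the four-fifths rule
```

Note that the rule compares rates, not raw counts: 40 selections out of 100 applicants and 400 out of 1,000 yield the same rate.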

Think of it like a warning light on your car’s dashboard. The light does not tell you what is wrong with the engine — it tells you something needs attention. The four-fifths rule works the same way: it does not prove discrimination, but it signals you should investigate.

What makes the rule newly relevant is automated hiring. When companies use AI-based resume screeners, interview schedulers, or skill assessments, these tools make thousands of decisions per day. In its guidance on AI and automated selection procedures, the EEOC confirmed that the four-fifths rule applies to automated selection tools, but emphasized that even smaller differences may indicate adverse impact when AI systems process large volumes of decisions. This connection between a decades-old employment standard and modern AI accountability is why fairness metrics like disparate impact analysis now sit near the center of regulatory discussions — from COMPAS-era case studies to EU AI Act compliance frameworks.

How It’s Used in Practice

When teams audit AI-powered hiring tools, the four-fifths rule is usually the first check they run. Pull the selection rates by protected group (race, gender, age, disability status), then compare every group’s pass rate against the highest group’s rate. If any ratio dips below 0.80, the tool gets flagged for deeper review.

In practice, fairness toolkits like AI Fairness 360 and Fairlearn automate this calculation across multiple protected attributes simultaneously. A data scientist running an audit feeds the model’s predictions and demographic labels into a library function that returns adverse impact ratios for every group combination. The output tells you which groups are affected and by how much.
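Under the hood, the computation these toolkits automate is straightforward. Here is a minimal stdlib sketch of the same idea (the function names are hypothetical, not the AI Fairness 360 or Fairlearn API):

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group_label, selected: bool) pairs."""
    totals, passes = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        passes[group] += int(selected)
    return {g: passes[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, threshold=0.80):
    """Ratio of each group's rate to the top group's rate, plus flagged groups."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    ratios = {g: rate / top for g, rate in rates.items()}
    flagged = sorted(g for g, r in ratios.items() if r < threshold)
    return ratios, flagged

# Toy audit: group A passes at 60%, group B at 40%.
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 40 + [("B", False)] * 60)
ratios, flagged = adverse_impact_ratios(decisions)
print(flagged)  # ['B'] -- B's ratio is about 0.67, below the 0.80 threshold
```

The toolkits add the bookkeeping this sketch omits: handling multiple protected attributes at once, confidence intervals, and integration with model prediction pipelines.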

Pro Tip: Run the four-fifths calculation on your training data and your production data separately. A model can pass on training data and fail in production if the applicant pool shifts over time. Schedule quarterly audits, not one-time checks.

When to Use / When Not

| Scenario | Use | Avoid |
| --- | :---: | :---: |
| Auditing an AI hiring tool for disparate impact before deployment | ✓ | |
| Checking if a promotion algorithm treats demographic groups fairly | ✓ | |
| Measuring fairness in a medical diagnosis model with no selection decision | | ✓ |
| Initial screening of a resume-ranking system across protected groups | ✓ | |
| Evaluating individual-level fairness for a single applicant’s outcome | | ✓ |
| Assessing adverse impact in a lending model outside employment context | | ✓ |

Common Misconception

Myth: Passing the four-fifths rule means your AI system is fair and legally compliant. Reality: The four-fifths rule is a screening indicator, not a verdict. The EEOC itself describes the rule as “merely a rule of thumb” that does not resolve the question of unlawful discrimination. A tool can pass the four-fifths threshold and still produce unfair outcomes through other pathways, such as proxy discrimination (where neutral-seeming criteria correlate with protected traits) or intersectional bias affecting subgroups the rule does not test individually.

One Sentence to Remember

The four-fifths rule is your first-pass alarm for bias in selection systems — if any group’s pass rate drops below 80% of the top group’s rate, stop and investigate before that statistical gap becomes a legal and ethical problem.

FAQ

Q: Does the four-fifths rule apply to AI systems, not just traditional hiring?
A: Yes. The EEOC confirmed that automated decision-making tools, including AI-based screening systems, are subject to the same adverse impact analysis as traditional selection procedures.

Q: What happens when a system fails the four-fifths test?
A: Failing the test does not automatically mean discrimination occurred. It triggers a deeper investigation into whether the selection criteria are job-related and consistent with business necessity.

Q: Is the four-fifths rule used outside the United States?
A: The rule is US-specific, originating from EEOC guidelines. The EU AI Act addresses algorithmic fairness through different mechanisms, though the underlying concept of disparate impact analysis is recognized internationally.

Expert Takes

The four-fifths rule reduces a complex fairness question to a single ratio. That simplicity is both its strength and its limitation. The threshold captures group-level selection rate disparities effectively, but it tells you nothing about why the gap exists or whether individual decisions within the process are justified. Pair it with equalized odds and calibration checks to get a fuller picture of where bias enters the pipeline.

If you are building an audit pipeline for hiring AI, wire the four-fifths calculation into your CI process so it runs on every model update. The common mistake is treating fairness as a one-time gate. Selection rates shift as applicant demographics change. Automate the ratio check against each protected attribute, log the results, and set alerts for any group that drops below the threshold. Continuous monitoring catches drift that annual audits miss.
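One way to wire that gate into CI is a plain assertion that fails the build whenever any group drops below the threshold. A minimal sketch, assuming your evaluation step already produces per-group selection rates (the function name and example numbers are placeholders):

```python
RATIO_FLOOR = 0.80  # the four-fifths threshold

def assert_four_fifths(rates_by_group: dict, floor: float = RATIO_FLOOR) -> None:
    """Raise AssertionError -- failing the CI job -- if any group's
    selection-rate ratio against the top group falls below the floor."""
    top = max(rates_by_group.values())
    for group, rate in rates_by_group.items():
        ratio = rate / top
        assert ratio >= floor, (
            f"Adverse impact flag: group {group!r} ratio {ratio:.2f} < {floor}"
        )

# Example with placeholder rates from the latest evaluation run:
assert_four_fifths({"group_a": 0.55, "group_b": 0.50})  # 0.50/0.55 ≈ 0.91, passes
```

Logging each run's ratios alongside the pass/fail result gives you the audit trail that a one-time gate never produces.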

Every company deploying AI in hiring will face a compliance question within the next regulatory cycle. The four-fifths rule is the fastest way to answer: “Can we prove our tool is not discriminating?” Organizations that bake this metric into their deployment checklist now will spend far less when regulators come asking. Those who wait will retrofit under pressure, and retrofitting fairness is always more expensive than building it in from the start.

The quiet problem with the four-fifths rule is what it does not measure. It examines group-level outcomes but stays blind to intersectional identities — a Black woman’s experience is not the sum of “Black” plus “woman.” It normalizes a specific amount of disparity as acceptable. Who decided that a twenty percent gap between groups is where concern begins? A threshold that was pragmatic in the paper-application era deserves harder questions in a world where algorithms reject thousands of people per hour.