AI Fairness

Designing Equal Opportunity Algorithms


About

A theory of justice for AI models making decisions about employment, lending, education, criminal justice, and other important social goods.

Decisions about important social goods like education, employment, housing, loans, health care, and criminal justice are increasingly automated with the help of AI. But because AI models are trained on data that reflect historical inequalities, they often produce unequal outcomes for members of disadvantaged groups. In AI Fairness, Derek Leben draws on traditional philosophical theories of fairness to develop a framework for evaluating AI models, a theory of algorithmic justice inspired by the theory of justice developed by the American philosopher John Rawls.

For several years now, researchers who design AI models have investigated the causes of inequalities in AI decisions and proposed fairness metrics and techniques for mitigating them. It turns out that in most realistic conditions it is impossible to satisfy all of these metrics simultaneously. Because of this, companies using AI systems will have to choose which metric they consider the correct measure of fairness, and regulators will need to determine how existing laws apply to AI systems. Leben provides a detailed set of practical recommendations for companies looking to evaluate their AI systems and for regulators thinking about laws governing AI systems, and he offers an honest analysis of the costs of implementing fairness in AI systems, as well as when those costs may or may not be acceptable.
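
To make the idea of competing fairness metrics concrete, here is a minimal illustrative sketch in Python, not drawn from the book; the data and function names are hypothetical. It computes two widely used group-fairness metrics, the demographic parity gap and the equal opportunity (true-positive-rate) gap, on toy predictions. Known impossibility results, for example for calibration versus error-rate balance when base rates differ, are what lie behind the claim that no model can satisfy every proposed metric at once outside of special cases.

import numpy as np

def demographic_parity_gap(y_pred, group):
    # Difference in positive-decision rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    # Difference in true-positive rates (recall among the truly qualified).
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_0 - tpr_1)

# Hypothetical labels and predictions for two groups (0 and 1).
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])

print("demographic parity gap:", demographic_parity_gap(y_pred, group))        # 0.25
print("equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))  # ~0.33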

Author

Derek Leben is Associate Teaching Professor of Business Ethics at the Tepper School of Business at Carnegie Mellon University. As founder of the consulting group Ethical Algorithms, he has worked with governments and companies to develop policies on fairness and benefit for AI and autonomous systems.
