Fair algorithms

Friday, March 31, 2017
by Cameron Shelley

An interesting piece by Matt Reynolds in New Scientist describes work that aims to make algorithms fair.  A team of computer scientists at the Alan Turing Institute in London defines a fair algorithm as follows:

[a fair algorithm is] one that makes the same decision about an individual regardless of demographic background.

In practice, this definition describes algorithms that would make the same decision about an individual regardless of their race, creed, nationality, or other considerations not morally relevant to the decision.
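This definition can be read as a counterfactual test: change only the demographic attribute and see whether the decision changes. Here is a minimal sketch of that test in Python; the decision rule, attribute names, and values are hypothetical, invented purely for illustration.

```python
def is_fair_for(decide, applicant, attribute, alternatives):
    """Check that decide() gives the same answer for this applicant
    no matter which value the given demographic attribute takes."""
    baseline = decide(applicant)
    for value in alternatives:
        variant = dict(applicant, **{attribute: value})  # copy, swap attribute
        if decide(variant) != baseline:
            return False
    return True

# A loan rule that (improperly) considers nationality.
def decide(applicant):
    return applicant["income"] > 50000 and applicant["nationality"] == "X"

applicant = {"income": 60000, "nationality": "X"}
print(is_fair_for(decide, applicant, "nationality", ["X", "Y"]))  # False
```

A rule that consulted only income would pass this check; the one above fails it because flipping nationality flips the decision.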

Making algorithms fair in this sense is important because they are increasingly used to make decisions about people's interests, such as who may receive a bank loan or be hired for a job.  To make society fair, such decisions must also be fair.

At least, that is the idea.  However, being fair to everyone in this sense might not always be desirable.  For example, some equity programs aim to achieve a fair society by making decisions that favor the interests of underrepresented groups, such as women in STEM fields.  This form of discrimination is justified on the grounds that it reverses the effects of past discrimination, and does so in a timely manner.

An algorithm that is strictly unbiased would eventually produce a fair outcome, but one that could leave a generation or more to endure the consequences of past inequities.  Would that be a case of "justice delayed is justice denied"?

Even so, it is encouraging to see attention paid to the issue of fairness in algorithms.  All too often, people seem to think that software is inherently objective, so that whatever a computer program recommends is, ipso facto, unbiased. 

I think one reason for this misperception is the assumption that since algorithms are uncaring, in the sense that they have no feelings about the decisions they make, it follows that they are objective and, thus, fair.

This notion confuses two senses of subjectivity.  In one sense, to be subjective means to have personal feelings about how things are, feelings that may be distinct from the feelings of others.  In this sense, it is true to say that algorithms as such are not subjective and thus are objective.

In another sense, to be subjective means to have distinct preferences or biases about how things should be.  In this sense, an algorithm may be quite biased: Imagine recommendation software that always answers "McDonald's" when people ask it where they should have dinner.  Very biased and not hard to code!
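Indeed, that hypothetical recommender is about as short as a program can be, which is part of the point: bias requires no sophistication at all.

```python
def where_should_i_have_dinner(preferences):
    # Ignores the asker's preferences entirely; the bias is built in.
    return "McDonald's"

# No matter what the user asks for, the answer never changes.
print(where_should_i_have_dinner({"cuisine": "Thai", "budget": "modest"}))  # McDonald's
```

The program has no feelings, so it is "objective" in the first sense above, yet it is maximally biased in the second sense.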

Curious to read more about fairness in design?  Check out my recent post on baby monitors.  Or, may I offer a completely unbiased recommendation: my own article, Fairness in technological design?


Dublin Castle Gates of Fortitude and Justice / By J.-H. Janßen (Own work) [GFDL or CC BY-SA 3.0], via Wikimedia Commons