Machine Learning has become more powerful over the past decade, sparking a wave of new applications. Some of these applications lie within the social domain, where models based on data profiles can have a significant impact on the lives of individuals. To prevent unwanted discrimination in these models, various methods have been proposed within the field of algorithmic fairness.
The present paper aims to provide context for fairness methods, connecting technical research with public debate and practical considerations. The goal of algorithmic fairness is further defined and separated from related problems that might cause confusion in the ongoing debate. A fairness method based on causality theory is then discussed, making our recent technical research accessible to a non-specialist audience. Finally, fair algorithms are considered from the perspective of practical deployment, identifying challenges in bringing the theory into practice.