Research into fair and explainable algorithms
Defining algorithmic fairness
Algorithms can be used to make decisions: for example, to select from a pool of job applicants or to assess what kind of care someone needs. Joosje Goedhart explains, ‘There are various definitions of algorithmic fairness, where a distinction can be made between group fairness and individual fairness.’ An example of group fairness would be inviting equal numbers of male and female applicants for an interview when recruiting staff. Individual fairness means ensuring equal treatment of individuals with comparable backgrounds. It is often assumed that group fairness in an algorithm results in a less effective outcome, one with less ‘predictive power’.
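To make these definitions concrete, here is a minimal Python sketch of a group-fairness check — a generic demographic-parity calculation on invented applicant data, not the specific measures used in the research described here:

```python
import pandas as pd

# Hypothetical applicant data: one row per candidate, with a protected
# attribute ("gender") and the algorithm's decision ("invited").
applicants = pd.DataFrame({
    "gender":  ["m", "m", "f", "f", "m", "f", "m", "f"],
    "invited": [1, 1, 0, 1, 0, 1, 1, 0],
})

# Group fairness (demographic parity): the share of invited candidates
# should be roughly equal across the groups.
rates = applicants.groupby("gender")["invited"].mean()
print(rates)

# A common summary is the largest gap in selection rates between groups;
# 0 would mean perfect demographic parity.
print("demographic parity gap:", rates.max() - rates.min())
```

Individual fairness is harder to reduce to a single number: it first requires a notion of which applicants count as ‘comparable’, and then checks that comparable applicants receive similar decisions.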
Quality of initial datasets is key
‘Sticking with the example of job applicants, lack of predictive power means the algorithm would fail to select the best candidates,’ Goedhart continues. ‘My study examined whether this is actually the case, by comparing the outcomes of different group fairness algorithms for various applications.’ Goedhart conducted her study, which she completed in October 2020, at the City of Amsterdam and regularly consulted experts from CBS. During this period, CBS was also working on a project about fair algorithms, in close consultation with the University of Amsterdam and Amsterdam’s City Executive. ‘My conclusion is that the formulas built into an algorithm to make it fairer don’t always deliver on their promises. The trade-off between group fairness and the predictive power of your model mainly depends on the quality of your initial dataset.’
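This trade-off can be illustrated with a small, purely hypothetical simulation — this is not the setup of Goedhart’s study, just a generic sketch. A classifier is trained on synthetic hiring data in which the historical outcome correlates with group membership; shifting the decision threshold per group until selection rates match (a simple demographic-parity repair) then typically costs some accuracy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic hiring data in which the outcome correlates with group
# membership, mimicking an 'unfair world' baked into the dataset.
n = 2000
group = rng.integers(0, 2, n)                 # protected attribute (0/1)
score = rng.normal(loc=group * 0.5, size=n)  # group 1 scores higher on average
y = (score + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

clf = LogisticRegression().fit(score.reshape(-1, 1), y)
proba = clf.predict_proba(score.reshape(-1, 1))[:, 1]

def report(pred, label):
    acc = (pred == y).mean()
    r0, r1 = pred[group == 0].mean(), pred[group == 1].mean()
    print(f"{label}: accuracy={acc:.3f}, selection rates={r0:.2f} vs {r1:.2f}")

# Unconstrained decision rule: one threshold for everyone.
report((proba > 0.5).astype(int), "unconstrained")

# Naive group-fairness repair: give each group its own threshold so that
# both groups are selected at the same overall rate.
target = (proba > 0.5).mean()  # keep the overall selection rate fixed
fair = np.zeros(n, dtype=int)
for g in (0, 1):
    m = group == g
    fair[m] = (proba[m] > np.quantile(proba[m], 1 - target)).astype(int)
report(fair, "equalised")
```

How large the accuracy loss turns out to be depends on how strongly the dataset itself is skewed — in line with Goedhart’s conclusion that the trade-off mainly depends on the quality of the initial dataset.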
Explainable algorithms
Meanwhile, Tessa Cramwinckel was grappling with the concept of ‘explainability’ in algorithms. Her research, which is nearing completion, demonstrates that in most cases explainability simply means that scientists can explain the algorithm to each other. ‘But not to the ordinary citizen. And that’s not a good thing. For citizens, an algorithm is often a black box, yet the outcomes it generates can have a major impact, for example when algorithms are used to detect fraud.’ To gain more insight into the fairness and explainability of algorithms, Cramwinckel double-checked the outcomes of three different algorithms on three different datasets by consulting domain specialists: experts with highly specialised knowledge of the specific area in which the algorithm was being used. Cramwinckel reflects, ‘When you stop to think about it, it’s odd that programmers should determine what constitutes a fair and explainable algorithm. After all, they know very little about the practical implications. With this in mind, I used the feedback from the domain specialists to make the algorithms fairer.’ Cramwinckel is enthusiastic about her approach. ‘It was interesting to get these domain specialists involved in the issue of what makes an algorithm fair and explainable. It really heightened the specialists’ awareness of these issues. My conclusion is that this approach works: the feedback from domain specialists does make an algorithm fairer.’
Monitoring and evaluating outcomes
‘More than anything, my research has taught me that prejudice is part of human nature,’ says Goedhart. ‘We all have prejudices or make unconscious assumptions. As I see it, algorithms are unfair because the world is unfair. If you work with personal data, it’s vital to be aware of this and to take it into account. I therefore recommend that the outcomes of an algorithm should always be monitored and evaluated: look at the results it produces and evaluate them. And always be aware that a good algorithm is only one aspect of fair artificial intelligence. There is more to be gained by ensuring that you have the right datasets.’ Cramwinckel recommends restraint in applying artificial intelligence. ‘Only use AI when you really need to. CBS datasets, for example, can be highly complex. It might be better to use more traditional linear models that you can explain to everyone.’
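That last recommendation can be made tangible with a short sketch — the fraud-detection feature names below are invented for illustration. A linear model’s coefficients are its explanation: each one states how strongly a feature pushes the decision, something that can be shown to a citizen directly:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented fraud-detection features, for illustration only.
feature_names = ["income", "n_transactions", "account_age_years"]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([0.0, 1.2, -0.8]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

# The fitted coefficients are the model's explanation: sign and size show
# how each feature pushes a case towards or away from a fraud flag.
model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>20}: {coef:+.2f}")
```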
Relevant information
Barteld Braaksma, innovation manager at CBS, was involved in Cramwinckel’s research as part of the CBS project Poverty and AI. ‘Both students are right when they say that it all begins with good data. If your source data are not representative, neither are your results, no matter what algorithm you use. This is an important task for CBS. We need to ensure that we have good datasets and that we also supply the relevant information: what data have been included, have any distortions in the results been corrected and, if so, how was this done? Within the Dutch AI Coalition, a partnership in which government, business, education, research institutes and civil society organisations work together to connect AI initiatives, the correct and responsible use of data in AI applications is a central theme and one CBS wants to put firmly on the map.’
Knowledge partners
CBS is currently exploring the possibilities of self-learning algorithms for the production of statistics. Braaksma reveals, ‘We are looking carefully at what kind of algorithms we deploy and whether they offer genuine advantages compared to a more traditional approach. Fairness and explainability are always paramount.’ Other government bodies regularly approach CBS to learn from the knowledge and experience of its statisticians. CBS also works closely with other knowledge partners in this field, including TNO and a number of universities.
The next step in AI
The Dutch government recently made a first tranche of 276 million euros available to further develop artificial intelligence in the Netherlands. The ethical, legal and social aspects of artificial intelligence form the primary focus of so-called ‘ELSA’ labs. Topics such as the fairness and explainability of algorithms are very much part of their work. This is another area in which CBS is closely involved.
Related items
- Article - Study on fair algorithms for policy-making
- Article - TNO and CBS are joining forces on transparent and verifiable AI use
- Article - How can we make our algorithms as fair as possible?