
On inter-sectional bias and its remediation in data-driven models

It’s well established that machine learning has a problem with bias. Our datasets reflect the inequalities and prejudices of our daily lives, and the models we train and deploy can exacerbate them further. We even notice this in robotics (a field generally removed from people), where localisation and mapping systems perfected in green European settings struggle in the browns and yellows of the South African Highveld.

Back in 2016, some colleagues (@nunuska, @vukosi and others) and I at the CSIR submitted a funding proposal (#DecoloniseAI: Identifying and redressing structural and systemic bias in machine learning solutions) to investigate these problems and potentially devise mechanisms to address them. We’d put together a really nice team of machine learning experts, social scientists and ethnographers, and I felt the proposal was strong. Unfortunately, it was rejected on the grounds that there was insufficient evidence (beyond the ‘anecdotal’ evidence we’d drawn on when framing the proposal [1,2,3,4]) that this was a problem in society, despite the fact that part of our proposal was to systematically explore and show how widespread it was. Fast forward five years, and I hope the reviewers feel extremely embarrassed.

Faces automatically sorted by skin tone using PCA embeddings and with no meta-data required.

Still, I deeply regret being unable to pursue this line of work to the extent we had originally planned. Fortunately many others were able to do so, most notably Buolamwini and Gebru [5] with their seminal work on Gender Shades (suck on that, proposal reviewer 2). This work, along with conferences like ACM FAccT, has helped fairness move from a niche topic to a core pillar of machine learning research.

For my part, I continued to explore some related ideas with colleagues here and there, although in a much scaled-back form due to resource constraints, principally as part of Mkhuseli Ngxande’s PhD on driver drowsiness detection. Our concern was twofold: identifying bias and remedying it. On the identification side, we set out to detect intersectionally biased models without needing to know the protected (and often questionable) characteristics and traits of the people in our dataset. A common frustration I have with many bias remediation schemes is that they require meta-data about racial or gender classification, which seems both questionable and highly fallible (a bit like needing a pencil test to correct racism). So the first thing we did was explore visualisation as a way to identify bias in face-based driver drowsiness detection algorithms without needing meta-data. It turns out a simple PCA visualisation scheme [6] works quite well here, and was able to highlight clear problems in driver drowsiness detection datasets.

Overlaying classification error on the PCA face visualisation shows population groups and individuals where additional data is needed.
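For the curious, the core idea can be sketched in a few lines of Python. This is a rough illustration only, not the exact scheme in [6], and the function name and preprocessing are my own placeholders: flatten the aligned face crops, project them with PCA, and colour each point by the model’s per-sample error, so that clusters of high-error faces stand out without any demographic meta-data.

import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

def plot_error_embedding(faces, errors):
    """faces: (N, H, W) aligned grayscale face crops; errors: (N,) per-sample loss or 0/1 mistakes."""
    X = faces.reshape(len(faces), -1).astype(np.float32)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)   # standardise pixel intensities
    Z = PCA(n_components=2).fit_transform(X)             # low-dimensional face embedding

    fig, ax = plt.subplots(figsize=(6, 5))
    sc = ax.scatter(Z[:, 0], Z[:, 1], c=errors, cmap="viridis", s=12)
    fig.colorbar(sc, ax=ax, label="classification error")
    ax.set_xlabel("principal component 1")
    ax.set_ylabel("principal component 2")
    ax.set_title("Where in face space does the model fail?")
    plt.show()

If errors concentrate in a particular region of the embedding, that region points to the faces (and, implicitly, the population groups) the model is under-serving.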

We then went on to explore correcting these problems using a meta-learning data augmentation loop, relying on generative adversarial networks to generate synthetic faces that are close to those our system was failing on [7]. This proved extremely effective in the driver drowsiness detection domain.

A meta-learning loop inspects performance on a validation set, and generates synthetic images that are similar to failure cases, in order to balance the training data.
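The loop itself is conceptually simple. Below is a minimal sketch in Python, assuming placeholder interfaces for the classifier and the GAN (the actual implementation in [7] differs in its details): each round, validation failures are identified, the GAN generates labelled synthetic faces resembling them, and these are folded back into the training data before retraining.

from typing import Callable, List, Sequence, Tuple

Sample = Tuple[object, int]  # (face image, label) pair

def remediate(
    fit: Callable[[List[Sample]], None],                # retrains the drowsiness classifier
    per_sample_loss: Callable[[Sequence[Sample]], Sequence[float]],
    sample_similar: Callable[[List[object], int], List[Sample]],  # GAN: labelled synthetic faces near the given ones
    train_set: List[Sample],
    val_set: Sequence[Sample],
    rounds: int = 5,
    n_synth: int = 500,
    threshold: float = 0.5,
) -> List[Sample]:
    """Meta-learning augmentation: validation failures drive targeted GAN augmentation."""
    for _ in range(rounds):
        fit(train_set)                                  # (re)train on the current training data
        losses = per_sample_loss(val_set)               # inspect performance on the validation set
        failures = [x for (x, _), loss in zip(val_set, losses) if loss > threshold]
        if not failures:
            break                                       # no systematic failures left to target
        # Generate synthetic faces resembling the failure cases and fold them back in,
        # rebalancing the training data towards under-served groups.
        train_set = train_set + sample_similar(failures, n_synth)
    return train_set

Of course, as the next paragraph makes clear, the quality of the validation set is doing a lot of work here.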

I think this is a good start, but it has a number of problems. First, it still requires a good, representative, bias-free validation set, which is the main problem we were trying to address in the first place. It also needs a large but fair dataset to train a GAN; although this requires less labelling effort than the drowsiness detection task itself, it is still a potential problem.

But for me the biggest limitation of this work was its lack of nuance, a consequence of being unable to involve the social scientists and ethnographers we had originally hoped to bring in through that early funding proposal. Bias remediation in this manner really just plasters over the symptoms and serves to obscure, rather than eliminate, the underlying problems. Many have spoken about better and more comprehensive ways to address these issues (this workshop by Gebru and Denton, this thread by @math_rachel), and there are many researchers dedicated to the field doing excellent work along these lines.

For my part, these issues reinforce the need for stronger inductive biases in our models to help focus learning, the need to move away from hard decision making and to embrace uncertainty, and the importance of questioning when and whether a data-driven solution is actually needed.

References

[1]  “New Zealand passport robot thinks this Asian man’s eyes are closed ….” 9 Dec. 2016, http://www.cnn.com/2016/12/07/asia/new-zealand-passport-robot-asian-trnd/. Accessed 15 Dec. 2016.

[2]  “Google Photos labels black people as ‘gorillas’ – Telegraph.” 1 Jul. 2015, http://www.telegraph.co.uk/technology/google/11710136/Google-Photos-assigns-gorilla-tag-to-photos-of-black-people.html. Accessed 15 Dec. 2016.

[3] Norris, Pippa. Digital divide: Civic engagement, information poverty, and the Internet worldwide. Cambridge University Press, 2001.

[4] Lum, K. and Isaac, W. (2016), To predict and serve?. Significance, 13: 14–19. doi:10.1111/j.1740-9713.2016.00960.x

[5] Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional accuracy disparities in commercial gender classification.” Conference on Fairness, Accountability and Transparency, 2018.

[6] M. Ngxande, J. Tapamo and M. Burke, “Detecting inter-sectional accuracy differences in driver drowsiness detection algorithms,” 2020 International SAUPEC/RobMech/PRASA Conference, Cape Town, South Africa, 2020, pp. 1-6, doi: 10.1109/SAUPEC/RobMech/PRASA48453.2020.9041105

[7] M. Ngxande, J. Tapamo and M. Burke, “Bias Remediation in Driver Drowsiness Detection Systems Using Generative Adversarial Networks,” in IEEE Access, vol. 8, pp. 55592-55601, 2020, doi: 10.1109/ACCESS.2020.2981912.