Removing bias in AI isn’t enough; it must take intersectionality into account.

Atibhi Agrawal
4 min read · Apr 23, 2019


“If I’m a black woman, I have some disadvantages because I’m a woman and some disadvantages because I’m black. But I also have some disadvantages specifically because I’m a black woman, which neither black men nor white women have to deal with. That’s intersectionality; race, gender, and every other way to be disadvantaged interact with each other.” (Unknown)

Simply put, intersectionality considers different systems of oppression, and specifically how they overlap and compound one another. Looking at intersectionality in AI is important because we tend to think of machines as cold, calculating, and unbiased, and we assume that learning systems will always converge on the ground truth because they are driven by impartial algorithms.

However, recent studies demonstrate that machine learning algorithms can discriminate on the basis of classes such as race and gender. Artificial intelligence (AI) and machine learning are rapidly infiltrating every aspect of society. From helping determine who is hired, fired, or granted a loan, to how long an individual spends in prison, decisions that were traditionally made by humans are increasingly made by algorithms. (J. Buolamwini et al., 2018)

Many AI systems, face recognition tools for example, rely on machine learning algorithms that are trained with labeled data. It has recently been shown that algorithms trained with biased data result in algorithmic discrimination (Caliskan et al., 2017). In one AI model trained for skin cancer detection, the training dataset consisted predominantly of white males, and the people building the algorithm were also white males, so they did not notice the bias the model was acquiring. As a result, the model’s accuracy was very poor when it was tested in the real world on people of all genders and races.

Another example of the effect of bias in AI is the use of automated face recognition by law enforcement. A year-long research investigation across 100 police departments revealed that African-American men are more likely to be stopped by law enforcement and subjected to face recognition searches than individuals of other ethnicities. False positives and unwarranted searches pose a threat to civil liberties. Some face recognition systems have been shown to misidentify people of color, women, and young people at high rates (Klare et al., 2012). Monitoring the phenotypic and demographic accuracy of these systems, as well as their use, is necessary to protect citizens’ rights and to keep vendors and law enforcement accountable to the public.
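One way this kind of skew can be caught early is by simply counting how many training examples fall into each intersectional subgroup before any model is trained. Here is a minimal sketch of such a check; the column names and records are hypothetical, not drawn from any of the studies above:

```python
# A minimal sketch of a dataset-representation check. The attribute names
# ("gender", "skin_tone") and the example records are hypothetical; the idea
# is simply to count how many training examples fall into each intersectional
# subgroup before any model is trained.
from collections import Counter

def subgroup_counts(records, attributes=("gender", "skin_tone")):
    """Count training examples per intersectional subgroup."""
    return Counter(tuple(r[a] for a in attributes) for r in records)

training_data = [
    {"gender": "male", "skin_tone": "lighter", "label": "benign"},
    {"gender": "male", "skin_tone": "lighter", "label": "malignant"},
    {"gender": "male", "skin_tone": "lighter", "label": "benign"},
    {"gender": "female", "skin_tone": "darker", "label": "benign"},
]

counts = subgroup_counts(training_data)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} examples ({n / total:.0%} of the data)")
# A heavily skewed distribution here is an early warning that the model's
# accuracy will likely be much worse for the under-represented subgroups.
```

A heavily skewed count like this is the kind of signal the skin cancer team could have acted on before their model ever reached the real world.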

Joy Buolamwini and her project the Algorithmic Justice League have produced a growing body of work that demonstrates the ways that machine learning is intersectionally biased. In the project “Gender Shades” (J. Buolamwini et al., 2018), they show how computer vision algorithms trained on ‘pale male’ datasets perform best on images of white men and worst on images of black women. To demonstrate this, Buolamwini first had to create a new benchmark dataset of images of faces, both male and female, with a range of skin tones. This work not only demonstrates that facial recognition systems are biased; it also provides a concrete example of why intersectional training datasets are needed, how intersectional benchmarks can be created, and why intersectional audits matter for all machine learning systems. The urgency of doing so is directly proportional to the impact of algorithmic decision systems on people’s life chances.
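To make the idea of an intersectional audit concrete, here is a rough sketch in the spirit of Gender Shades: compute accuracy separately for every combination of gender and (binned) skin tone rather than reporting one aggregate number. This is not the authors’ code, and the benchmark rows and attribute names are hypothetical:

```python
# A rough sketch of an intersectional accuracy audit. Each row pairs a model's
# prediction with the ground-truth label plus the subject's gender and skin tone.
from collections import defaultdict

def intersectional_accuracy(results, attributes=("gender", "skin_tone")):
    """Return accuracy per intersectional subgroup, e.g. ('female', 'darker')."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for row in results:
        group = tuple(row[a] for a in attributes)
        total[group] += 1
        correct[group] += int(row["prediction"] == row["label"])
    return {group: correct[group] / total[group] for group in total}

benchmark_results = [
    {"gender": "male", "skin_tone": "lighter", "label": "male", "prediction": "male"},
    {"gender": "male", "skin_tone": "darker", "label": "male", "prediction": "male"},
    {"gender": "female", "skin_tone": "lighter", "label": "female", "prediction": "female"},
    {"gender": "female", "skin_tone": "darker", "label": "female", "prediction": "male"},
]

for group, acc in sorted(intersectional_accuracy(benchmark_results).items()):
    print(f"{group}: accuracy {acc:.0%}")
```

Reporting accuracy per subgroup, rather than a single overall score, is exactly what exposes the gap between ‘pale male’ faces and darker-skinned female faces.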

At the institutional level, we might consider how the institutions that support the development of A.I. systems reproduce the matrix of domination in their practices. Intersectional theory compels us to consider how these and other institutions are involved in the design of A.I. systems that will shape the distribution of benefits and harms across society. For example, the ability to immigrate to the United States is unequally distributed among different groups of people through a combination of laws passed by the U.S. Congress, software decision systems, executive orders that influence enforcement priorities, and so on. Recently, the Department of Homeland Security held an open bid process to develop an automated ‘good immigrant/bad immigrant’ prediction system that would draw on people’s public social media profiles. After extensive pushback from civil liberties and immigrant rights advocates, DHS announced that the system was beyond ‘present day capabilities’. However, it also announced that it would instead hire 180 people tasked with manually monitoring the social media profiles of immigrants from a list of about 100,000 people. In other words, within the broader immigration system, visa allocation has always been an algorithm, and it is one designed according to the political priorities of power holders. It is an algorithm that has long privileged whiteness, hetero- and cis-normativity, wealth, and higher socioeconomic status. (Sasha Costanza-Chock, 2018)

Intersectionality is thus an absolutely crucial concept for the development of A.I. Most pragmatically, non-intersectional algorithmic bias audits are insufficient to ensure algorithmic fairness. While interest in algorithmic bias audits is growing rapidly, especially in the Fairness, Accountability, and Transparency in Machine Learning community, most audits are single-axis: they look for a biased distribution of error rates along only one variable at a time, such as race or gender. This is an important advance, but it is essential that we develop a new norm of intersectional bias audits for machine learning systems.
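A toy example with made-up numbers shows why single-axis audits are not enough: error rates can look perfectly balanced when sliced by race alone or by gender alone, while two intersectional subgroups quietly carry four times the error rate of the others:

```python
# A toy illustration (hypothetical numbers) of how single-axis audits can mask
# intersectional disparity: every single-axis slice shows a 25% error rate,
# but two intersectional subgroups sit at 40% while the others sit at 10%.
from itertools import product

# (errors, total predictions) for each intersectional subgroup
outcomes = {
    ("group_a", "male"): (4, 10),
    ("group_a", "female"): (1, 10),
    ("group_b", "male"): (1, 10),
    ("group_b", "female"): (4, 10),
}

def error_rate(keys):
    errors = sum(outcomes[k][0] for k in keys)
    total = sum(outcomes[k][1] for k in keys)
    return errors / total

# Single-axis audits: both axes look perfectly balanced (25% everywhere).
for race in ("group_a", "group_b"):
    print(race, error_rate([k for k in outcomes if k[0] == race]))
for gender in ("male", "female"):
    print(gender, error_rate([k for k in outcomes if k[1] == gender]))

# Intersectional audit: the disparity only appears at the intersections.
for race, gender in product(("group_a", "group_b"), ("male", "female")):
    print((race, gender), error_rate([(race, gender)]))
```

Only the intersectional slice reveals the disparity, which is why a single-axis audit can certify a system as ‘fair’ while it still fails specific groups of people.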

Works Cited

Joy Buolamwini and Timnit Gebru (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of Machine Learning Research, Conference on Fairness, Accountability, and Transparency, pages 1-12, 2018. Retrieved from: http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017. Retrieved from: https://science.sciencemag.org/content/356/6334/183.abstract

Brendan F. Klare, Ben Klein, Emma Taborsky, Austin Blanton, Jordan Cheney, Kristen Allen, Patrick Grother, Alan Mah, and Anil K. Jain (2015). Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1931-1939, 2015. Retrieved from: https://ieeexplore.ieee.org/document/7298803

Sasha Costanza-Chock (2018). Design Justice, A.I., and Escape from the Matrix of Domination. Journal of Design and Science, MIT Press, 26 July 2018. Retrieved from: https://jods.mitpress.mit.edu/pub/costanza-chock

Oscar H. Gandy, "Matrix Multiplication and the Digital Divide," in Race after the Internet, Chapter 6. Retrieved from: Race folder in LMS.

Emrys Schoemaker, "Digital purdah: How gender segregation persists over social media." Retrieved from: Gender folder in LMS.

Written by Atibhi Agrawal

Software Engineer @Amazon London
