Shaping History Series: Innovative Leaders Driving Digital Transformation—Dr. Joy Buolamwini
This post is part of our Shaping History series.
Feb 17, 2022
Aligned with the Center for Digital Acceleration’s commitment to fostering equity and inclusion within our team and the broader international development sector, Digital@DAI is launching a new series during Black History Month called Shaping History: Innovative Leaders Driving Digital Transformation. The series aims to spotlight racially diverse leaders in the tech space at home and in the countries where we work.
Without further ado, meet our first profile subject: Dr. Joy Buolamwini.
Poet of Code
Computer scientist and activist Dr. Buolamwini is one of the world’s leading voices examining algorithmic harms and biases, with a particular focus on racial and gender bias in artificial intelligence (AI). As told in the documentary Coded Bias, Dr. Buolamwini, who is Black, became interested in this area when popular facial recognition software did not recognize her face until she literally put on a white mask. Partly as a result of this experience, in her doctoral research at the Massachusetts Institute of Technology she developed the concept of an “evocative audit,” rooted in “black feminist epistemology, intersectionality, and the outsider within standpoint.” Unlike traditional algorithmic audits, which are meant to be de-personalized examinations of whether algorithms are actually doing what they are intended to do in a narrow technical sense, evocative audits are deliberately designed to show the human impact of algorithmic systems on individual people. In an interview with The Markup, Dr. Buolamwini said of evocative audits: “The whole point of doing this is to actually invite you to empathize. What does it mean to face algorithmic harm or to experience machine erasure or denigration in some way? That’s what the evocative audit does. It draws you in to show why these systems matter and how they can impact people in a negative way.”
While she was still working on her doctoral research, Dr. Buolamwini’s other work launched her onto the national stage. Most notably, her 2018 paper with Dr. Timnit Gebru on racial and gender bias in facial recognition algorithms has been cited more than 2,400 times in less than four years since its publication. On the strength of this body of work, Dr. Buolamwini has been named to Time’s 100 Next, Forbes’s 30 Under 30, and MIT Tech Review’s 35 Under 35, among other honors. This recognition has made her a public intellectual in the technology space, where she regularly highlights algorithmic harms that materially impact individual people, especially people of color.
Most recently, in January, she wrote an op-ed for The Atlantic about the Internal Revenue Service’s (IRS) proposed use of facial recognition on its website. Her article launched a broader conversation around the use of facial recognition to access government services in the United States, which quickly led to the IRS distancing itself from its original plans.
In addition to her academic work, Dr. Buolamwini is an artist—specifically a poet of code. Her spoken-word piece “AI, Ain’t I A Woman?”—viewed almost 100,000 times on YouTube—uses Sojourner Truth’s famous question “Ain’t I A Woman?” as a launching point to show how commercially available facial recognition technologies misgender well-known women of color, especially Black women. In her interview with The Markup, she also explicitly identifies this piece as an evocative audit, showing how evocative audits can take a wide variety of forms.
Photo: Megan Smith (former U.S. Chief Technology Officer), AJL.org.
Implications of Dr. Buolamwini’s Work for ICT4D
Dr. Buolamwini’s work has multiple potential applications and implications for the international development sector. A few initial thoughts:
Be wary of facial recognition and other biometric identification tools. Even though biometrics are already relatively widely used within the international development sector (especially in the humanitarian space), Dr. Buolamwini’s work highlights that biometrics such as facial recognition have serious limitations, including in emerging markets. International development is still largely Global North-driven. If facial recognition technology deployed in the Global South is developed in the Global North using training datasets made up primarily of white faces, the AI will not work as intended; the disaggregated audit sketched after these thoughts shows how quickly such failures surface once accuracy is broken out by subgroup. Even leaving aside the thorny ethical questions of informed consent and data privacy, deploying facial recognition that does not work well on the target population defeats the purpose of using facial recognition at all. Organizations that want to use facial recognition in the Global South have two practical options: train and validate the system on datasets featuring people from the country where they are working, or better yet (for all the reasons embodied by Dr. Buolamwini’s work on algorithmic harms and bias), do not use facial recognition systems at all.
Institutionalize identifying and fighting algorithmic harms and bias in your digital tools, centering people in digital development. Dr. Buolamwini’s academic work has primarily, though not exclusively, focused on facial recognition. Her body of work points to the need to identify and mitigate the effects of algorithmic harms and bias in the digital tools that we use in our work. Even though algorithmic bias and algorithmic harms may seem like high-level, abstract concepts, algorithms exert real power over our lives, from the jobs we hold to where we live to the money available to us. The Global South is not exempt from these dynamics, nor (especially) is the international development sector’s work in the Global South. Buolamwini’s work with evocative audits, described in her Ph.D. thesis abstract as “an approach to humanizing the negative impacts that can result from algorithmic systems… [and] that allow others to bear witness to issues created by algorithmic systems of interest,” reminds us to center individual people and their lived experiences when it comes to algorithms. Adapting the idea of an evocative audit for the digital development sector, and centering the experiences of individual people in the Global South by examining how they interact with algorithms and how those algorithms affect their lives, would be significant steps toward countering algorithmic bias and harm within the international development sector.
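To make disaggregated evaluation concrete, below is a minimal Python sketch of the kind of subgroup-level accuracy check that Dr. Buolamwini and Dr. Gebru’s 2018 paper argues for. It is an illustration only, not their actual methodology: the subgroup labels and results are hypothetical, and a real audit would draw predictions from the system under test and assign subgroups using a published rubric.

```python
from collections import defaultdict

# Hypothetical audit records: (subgroup, whether the system's prediction
# was correct). In a real audit these would come from running the facial
# recognition system under test against a labeled benchmark.
results = [
    ("darker-skinned female", False),
    ("darker-skinned female", False),
    ("darker-skinned female", True),
    ("darker-skinned male", True),
    ("darker-skinned male", True),
    ("lighter-skinned female", True),
    ("lighter-skinned female", True),
    ("lighter-skinned male", True),
    ("lighter-skinned male", True),
    ("lighter-skinned male", True),
]

def disaggregated_accuracy(records):
    """Return accuracy per subgroup rather than one aggregate score."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, is_correct in records:
        totals[group] += 1
        correct[group] += int(is_correct)
    return {group: correct[group] / totals[group] for group in totals}

# A single aggregate number (80% here) can look acceptable...
overall = sum(ok for _, ok in results) / len(results)
print(f"overall accuracy: {overall:.0%}")

# ...while the disaggregated view exposes which groups the system fails.
for group, acc in sorted(disaggregated_accuracy(results).items()):
    print(f"{group}: {acc:.0%}")
```

Even this toy example shows the pattern that matters: an aggregate score of 80 percent masks an accuracy of only 33 percent for darker-skinned women, the same kind of gap the 2018 paper documented at scale in commercial systems.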
Dr. Buolamwini’s website is here, and you can follow her on Twitter, Instagram, LinkedIn, and Medium.