Introducing the Winners of USAID’s Equitable AI Challenge

Artificial intelligence (AI) tools are a double-edged sword: they promise tremendous benefits for international development, but they have also demonstrated instances of bias and harm, often resulting from inequitable design, use, and impact. Recognizing that AI technologies can produce gender-inequitable outcomes, the U.S. Agency for International Development (USAID) launched the Equitable AI Challenge in the fall of 2021 to surface innovative and creative approaches to addressing them. This challenge, implemented through DAI’s Digital Frontiers, sought to support approaches that increase the accountability and transparency of AI systems in global development contexts. Dozens of competitors submitted approaches to preventing, identifying, or monitoring bias and harm against women and gender-nonconforming people—reflecting the larger goals of USAID’s Digital Strategy, the National Strategy on Gender Equity and Equality, and the recently launched USAID AI Action Plan.

USAID chose 28 diverse semi-finalists to attend a three-week virtual co-creation event, held between February 14 and March 1, which brought together technology firms, startups, small and medium enterprises, civil society organizations, and researchers from around the world. The co-creation focused on close collaboration between the public and private sectors, which allows diverse perspectives, local solutions, and partnerships to form among AI technology developers, investors, donors, and users. With a desire to address AI’s most critical issues, including bias and inequity within AI systems, participants were encouraged to collaborate on solutions, identify partnerships, and strengthen their proposals—all while forming a larger community of practice.

In April, USAID selected four proposals to receive grants to implement their approaches in alignment with the challenge’s objectives. Dive in and learn more about the winners of the Equitable AI Challenge as their work gets underway in the implementation phase.

USAID launched the Equitable AI Challenge to help identify and address actual and potential gender biases in AI systems across global development contexts. Image: USAID

Accion’s Center for Financial Inclusion (CFI): Creating a Due Diligence Model for Investors and Donors

CFI proposes to develop a due diligence model that takes gender-inequity issues into account when designing algorithms for inclusive finance. The tool will help impact investors and donors push digital finance companies and product designers to build better algorithm-development processes that treat the user, the user’s existing ecosystem, and user protection as design requirements. CFI’s approach is designed to support investors and donors in conducting due diligence and brings together the people who create the algorithms and those who interact with them. CFI expects to create a practical, flexible model that can be slotted into the due diligence and portfolio support processes of impact investors and donors. The proposed project will involve collaboration with several impact investors and donors, including FMO: Dutch Entrepreneurial Development Bank, Accion Venture Lab, and Quona Capital, among others.

Carnegie Mellon University (CMU), Palladium, and the World Food Programme (WFP): Partnering to Create an AI Gender Fairness Decision-Support Tool

This partnership aims to develop an AI Gender Fairness Decision-Support Tool—a compendium of products, including a conceptual framework, a training module, and a code toolkit, that have been vetted and refined through real-world applications. The project will build on existing resources, including CMU’s Ethics and Algorithms Toolkit and Dealing with Bias and Fairness in Data Science, IBM’s AI Fairness 360, Google’s What-If Tool, and CMU’s Aequitas. With this tool, the project expects AI project teams and AI product consumers in the development and humanitarian sectors to be able to regularly define gender equity objectives, measure gender inequity in their projects, diagnose its sources, and take actions to mitigate it. The partnership will leverage the WFP and Palladium’s network of ongoing projects to apply the toolkit. This includes incorporating the tool into the WFP Innovation Accelerator’s bootcamp curriculum to ensure that all future AI projects on-boarded into WFP’s innovation pipeline will prioritize fairness considerations as part of the early-stage design of their products and services. In addition, the tool will be incorporated in the data literacy component of WFP’s learning platform, WeLearn, and integrated with its global operational, technological, and resources environment.
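To make the idea of "measuring gender inequity" concrete, here is a minimal illustrative sketch—not the partnership's actual tool, and not using any of the named toolkits—of one common group-fairness metric: the disparate-impact ratio of selection rates between gender segments. The records and threshold are assumed for illustration.

```python
# Illustrative only: (gender, decision) pairs, where 1 = favorable outcome
# (e.g., an AI system approving an application).
records = [
    ("female", 1), ("female", 0), ("female", 0), ("female", 1),
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
]

def selection_rate(records, group):
    """Share of a group that received the favorable outcome."""
    outcomes = [d for g, d in records if g == group]
    return sum(outcomes) / len(outcomes)

female_rate = selection_rate(records, "female")  # 2/4 = 0.50
male_rate = selection_rate(records, "male")      # 3/4 = 0.75

# Disparate-impact ratio; the widely cited "80% rule" flags values below 0.8.
ratio = female_rate / male_rate
print(f"female: {female_rate:.2f}, male: {male_rate:.2f}, ratio: {ratio:.2f}")
```

Fairness toolkits such as those listed above compute this metric (among many others) over real model outputs; the point of the sketch is only to show what a single such measurement involves.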

Nivi and the University of Lagos (UNILAG): Partnering to Create a Gender-Aware Auditing Tool

In support of digital health interventions in Nigeria, UNILAG and Nivi are partnering to create a gender-aware auditing tool within Nivi’s existing health chatbot deployment there. The audit tool will evaluate real user interactions with the chatbot, along with the bot’s interpretation and response, incorporating user feedback on the adequacy of the bot’s response. The human judgments in this audit process serve two purposes: first, they can be aggregated into performance metrics, both globally and segmented by demographic; second, they can be used directly as training data to retrain, tune, and improve the natural language processing (NLP) models used by the chatbot.
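The aggregation step described above can be sketched in a few lines. This is a hypothetical illustration, not Nivi's or UNILAG's implementation; the record fields and demographic segments are assumed.

```python
from collections import defaultdict

# Hypothetical human audit judgments of chatbot responses.
audits = [
    {"demographic": "women 18-24", "judgment": "correct"},
    {"demographic": "women 18-24", "judgment": "incorrect"},
    {"demographic": "women 25-34", "judgment": "correct"},
    {"demographic": "men 18-24", "judgment": "correct"},
]

def accuracy_by_segment(audits):
    """Roll up judgments into a per-demographic accuracy metric."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [correct, total]
    for a in audits:
        seg = counts[a["demographic"]]
        seg[0] += a["judgment"] == "correct"
        seg[1] += 1
    return {segment: c / t for segment, (c, t) in counts.items()}

# Global metric alongside the segmented view.
global_accuracy = sum(a["judgment"] == "correct" for a in audits) / len(audits)
print(global_accuracy)
print(accuracy_by_segment(audits))
```

Comparing the segmented accuracies against the global figure is what reveals whether the chatbot serves some demographics worse than others; the same labeled records can then feed back into NLP model retraining.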

UNILAG and Nivi will first incorporate automated translation from low-resource languages, such as Hausa, into English. This allows the information-processing pipeline to scale, as downstream models for information extraction and diagnosis can be leveraged in new languages without comprehensive retraining. The team also plans to add support for Yoruba and Igbo later in the project. UNILAG and Nivi will also apply the tool to improve both the existing NLP intent-recognition modules of Nivi’s health guide chatbot and the new models developed by UNILAG through training and model testing. By building an NLP-based system that is more attuned to the needs of each local population, Nivi’s health chatbot and digital health services will be able to reach more women at lower cost and help them make informed health decisions.

RappiCard Mexico, the University of California, Berkeley, Northwestern University, Texas A&M University, and ITAM: Partnering to Research Gender Bias in Credit Allocation

This project will leverage technology to expand women’s access to credit and financial inclusion. RappiCard Mexico is the fintech arm of Rappi, the leading delivery platform in Latin America, and currently has more than 500,000 active credit cards in Mexico. This collaboration will combine novel “digital footprint” data with repayment data and machine learning to build gender-differentiated credit-scoring algorithms. The research will shed light on whether assessing creditworthiness using non-traditional sources of data, such as economic behavior and network interactions, via gender-differentiated credit-scoring methods can benefit both borrowers and lenders. The study’s findings carry relevance for a wide range of digital credit products and can influence credit origination practices in countries that consider gender in credit allocation, and policy design in countries that do not currently allow it. The research will also inform the debate around whether gender-blind algorithms are superior in expanding formal credit for women and preventing discrimination against women applying for credit.
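The core contrast—a single pooled model versus gender-differentiated models—can be illustrated with a deliberately trivial sketch. This is not the study's methodology; the data is invented, and the "model" here is just a segment's historical repayment rate standing in for a real machine-learning scorer.

```python
# Invented toy data: one record per borrower, 1 = loan repaid.
borrowers = [
    {"gender": "female", "repaid": 1}, {"gender": "female", "repaid": 1},
    {"gender": "female", "repaid": 0}, {"gender": "male", "repaid": 1},
    {"gender": "male", "repaid": 0}, {"gender": "male", "repaid": 0},
]

def pooled_score(data):
    """Gender-blind baseline: one score for everyone, the overall repayment rate."""
    return sum(b["repaid"] for b in data) / len(data)

def differentiated_scores(data):
    """Gender-differentiated baseline: one score per segment, fit on that segment only."""
    scores = {}
    for gender in {b["gender"] for b in data}:
        segment = [b for b in data if b["gender"] == gender]
        scores[gender] = sum(b["repaid"] for b in segment) / len(segment)
    return scores

print(pooled_score(borrowers))           # one pooled rate
print(differentiated_scores(borrowers))  # per-segment rates
```

Even in this toy, the pooled rate masks a difference between segments; the study asks whether real gender-differentiated models trained on digital-footprint and repayment data produce outcomes that are better for both borrowers and lenders.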

What’s Next: The Journey to Implementation

Through these diverse concepts spanning geographic regions and types of approaches—from improving AI fairness tools and data systems, to strengthening the evidence base for AI fairness in development contexts, to developing and testing more equitable algorithms—the winners of the Equitable AI Challenge will help USAID and its partners better address and prevent gender biases in AI systems in countries where USAID works.

Over the next year, these awardees will work with USAID and its partners to implement their approaches and generate new technical knowledge, lessons learned, and tested solutions for addressing gender bias in AI tools. Through this implementation phase, USAID seeks to foster a diverse and more inclusive digital ecosystem where all communities can benefit from emerging technologies like AI, and—most importantly—ensure that members of these communities are not harmed by them. This effort will inform USAID and the development community, providing a greater understanding of AI fairness tools and approaches, what they capture and what they leave out, and what tactics are needed to update, adapt, and socialize these tools for broader use.

Stay tuned as we share the ongoing progress of the challenge winners and build a stronger community of practice to learn together and work toward a more equitable AI-powered future.