When we conceived this research on ethics and artificial intelligence (AI), we were (and still are) inundated with news from around the globe about the detrimental impacts of AI. Concerns about facial recognition technologies making false identifications and discriminatory hiring tools were among the many headlines we saw. DAI's Center for Digital Acceleration (CDA) stays abreast of the latest trends in technology and recognizes the benefits of automation and AI. But these disturbing trends made it clear that the development sector needs to exercise greater caution when building and applying AI tools. AI is being adopted quickly in development, and we must act to prevent these tools from causing harm in the communities we work alongside.

Our newest CDA Insights paper explores where the development community currently stands with AI and what steps we need to take to ensure the safe and responsible use of these tools. At the heart of the paper is an adage that continues to echo throughout the development community: locally led and locally run solutions result in better, more sustainable, and safer outcomes. The paper outlines how to put this principle into practice when considering AI tools for development. Below is a snapshot of our recommendations. Click here to read the complete publication.

Our Recommendations

  1. Develop or adapt an ethical AI framework aligned to country-specific perceptions of ethics: We recommend that the international development community use ethical frameworks as foundations for building AI tools, beginning with research to determine whether existing frameworks are partially or wholly applicable to low- and middle-income countries (LMICs).

  2. Diversify data, designers, and decision-makers: Data is often blamed for the problems with AI, but it is not the only issue. There is a lack of diversity both in the training data used to develop AI systems and among the people who design those systems and decide when and where they are deployed.

  3. Develop metrics to guide ethical AI implementation in LMICs: Frameworks are a first step, but they can only take AI ethics so far. We need clearer metrics to help AI designers and deployers determine whether they are taking adequate measures to counter or mitigate AI bias.

  4. Cultivate partnerships between the Global South and Global North: Building on existing AI partnerships, especially North-South and South-South relationships, will create a community and nurture conversations that inform foundational research, data sharing, metrics, and technical assistance for governments and policymakers.

While we have seen successful use cases of AI tools in LMICs, we have also seen inadequate attention to their overall impact, effectiveness, and unintended consequences. The development sector and international donors are well positioned to build on existing investments in digital development by investing in protections for people entering the digital world. As more communities come online, the development community must maximize the benefits of AI while mitigating its risks.