From our social media feeds to the public benefits we receive, artificial intelligence (AI) is everywhere and affects everyone. AI can assist us in many ways: executing difficult, risky, or tedious tasks on our behalf, helping us save lives and cope with natural disasters, entertaining us, and making our daily lives more enjoyable. AI helps doctors make decisions about our health and helps judges and lawyers sift through cases, speeding up the judicial process.

Governments worldwide are increasingly turning to AI algorithms to automate or support decision-making in public services. Algorithms assist in urban planning, prioritize social-care cases, determine welfare entitlements, detect unemployment fraud, and monitor people in criminal justice and law enforcement settings. These algorithms are often seen as improving the efficiency and lowering the costs of public services.

However, evidence suggests that algorithmic systems in public service delivery can also cause harm, violate human rights (by reinforcing discrimination and undermining the privacy of personal data), and frequently lack transparency and accountability in their implementation. This conundrum is exacerbated by the fact that AI technologies work with vast amounts of (personal) data and can have crossover and multiplicative effects on human rights and the rule of law. AI systems may reinforce what they have learned from data and intensify risks such as racial or gender bias. There is mounting evidence that AI systems are far from neutral technology: they can reflect their creators’ (un)conscious preferences, priorities, and prejudices. Even when software developers take great care to minimize the influence of their own biases, the data used to train an algorithm can be another significant source of bias, and AI programs are susceptible to judgment errors in novel situations. Consider the infamous instance in which thousands of eligible Dutch families, many with immigrant backgrounds, were wrongfully implicated in fraud by an algorithm and forced to repay social assistance.
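To make this failure mode concrete, the sketch below (in Python, with entirely hypothetical data and a deliberately simplistic “model”) shows how a system trained on biased historical decisions reproduces them: because past caseworkers under-approved eligible applicants from one neighborhood, the model learns to deny that whole group, even though true eligibility rates are identical.

```python
# Illustrative sketch with hypothetical data: how bias in historical
# decisions propagates into an automated eligibility model.
import random

random.seed(0)

# Synthetic "historical" cases: each has a neighborhood code (a proxy that,
# in this toy example, correlates with immigrant background) and a past
# caseworker decision that was biased against neighborhood "B".
def make_case():
    neighborhood = random.choice(["A", "B"])
    truly_eligible = random.random() < 0.8          # same base rate for both groups
    if neighborhood == "B" and truly_eligible:
        approved = random.random() < 0.6            # biased historical denials
    else:
        approved = truly_eligible
    return neighborhood, truly_eligible, approved

history = [make_case() for _ in range(10_000)]

# A naive "model": learn the historical approval rate per neighborhood and
# approve whenever that rate exceeds 50%. Real systems are far more complex,
# but the failure mode is the same: the model encodes past decisions, not merit.
for hood in ("A", "B"):
    cases = [c for c in history if c[0] == hood]
    rate = sum(c[2] for c in cases) / len(cases)
    decision = "approve" if rate > 0.5 else "deny"
    print(f"Neighborhood {hood}: historical approval rate {rate:.0%} -> model says {decision}")
```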

Policymakers, regulators, and civil society organizations are advocating for “algorithmic transparency” in public service delivery. Policies and regulations are being implemented to address the transparency, explainability, and auditability of AI systems; guard against discrimination; and strengthen due process and personal data protection. Algorithmic transparency means disclosing how algorithmic tools inform decision-making by public policymakers and regulators, including providing information on algorithmic tools and algorithm-assisted decisions in a complete, open, understandable, easily accessible, and free format.
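What such a disclosure might look like in practice is sketched below. The field names and the example entry are our own illustrative assumptions, not a published standard; the point is that a public algorithm register entry can be both human-readable and machine-readable.

```python
# Minimal sketch of a machine-readable entry in a hypothetical public
# algorithm register. Field names are illustrative assumptions only.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AlgorithmRegisterEntry:
    name: str
    agency: str
    purpose: str                      # plain-language description of the decision supported
    decision_role: str                # "fully automated" or "decision support"
    data_sources: list = field(default_factory=list)
    human_oversight: str = ""         # who reviews or can override outcomes
    impact_assessment_url: str = ""   # link to the published HRIA, if any
    redress_contact: str = ""         # where affected individuals can appeal

entry = AlgorithmRegisterEntry(
    name="Benefit Eligibility Screener",
    agency="Ministry of Social Protection (example)",
    purpose="Flags applications for manual review; does not deny benefits on its own.",
    decision_role="decision support",
    data_sources=["application form", "national civil registry"],
    human_oversight="A caseworker reviews every flagged application.",
    impact_assessment_url="https://example.gov/hria/benefit-screener",
    redress_contact="https://example.gov/appeals",
)

# Publishing the entry as JSON keeps it open, free, and easy to audit.
print(json.dumps(asdict(entry), indent=2))
```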

While there have been some efforts to evaluate algorithmic accountability policies and regulations within particular institutions or contexts in the “developed world,” there have been few systematic, cross-jurisdictional studies of how these policies are implemented or how they affect human rights in developing countries. Existing literature in this nascent, fast-evolving space focuses mostly on developed economies. New research and an accompanying analytical framework are needed to generate informed insights applicable to developing countries.

Key Insights for Delivering Algorithmic Accountability

The newest research from DAI’s Center for Digital Acceleration (CDA) offers insights from select countries that public policymakers and international development donors can use to: (i) identify when AI deployment in the provision of public services might impact human rights, and (ii) ensure that appropriate and proportionate rights-ensuring “algorithmic accountability” elements are included in the delivery of public services.

The research is based on a literature review and key informant interviews in Brazil, Chile, Colombia, Egypt, Ghana, Kenya, Mexico, and Rwanda, all countries that have seen increasing automation in the provision of public services. Our key insights and recommendations for international development partners, policymakers, and regulators are summarized below:

1. A human rights-based approach is essential to building and governing trustworthy AI systems in public service delivery.

To ensure a rights-based approach in public sector operations, developing countries’ governments should have a readily accessible analytical framework to help them identify when AI components might impact human rights and how algorithmic accountability could mitigate those risks. Where AI systems threaten human rights, countries should protect and promote those rights and ensure that private sector actors conduct due diligence and human rights impact assessments (HRIAs) in line with their responsibilities. The outcome of an HRIA should be to identify the specific risks and impacts of an AI system and assign corresponding human rights safeguards. For instance, the United States’ Blueprint for an AI Bill of Rights approaches AI accountability and transparency from a human rights perspective. A useful framework for conducting algorithmic impact assessments grounded in human rights is the Fundamental Rights and Algorithms Impact Assessment (FRAIA) of the Dutch Ministry of the Interior and Kingdom Relations.

Based on this research, our CDA team has developed an analytical framework for algorithmic human rights impact assessment that international development donors and government officials can use throughout the lifecycle of AI tools to evaluate their impact on human rights in automated decision-making processes.

DOWNLOAD THE ANALYTICAL FRAMEWORK HERE

This framework has been adapted from the following sources: Brookings, Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms; Singapore Model AI Governance Framework (2020); and Government of the Netherlands, Fundamental Rights and Algorithms Impact Assessment (FRAIA).
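As a rough illustration of how such a framework can be operationalized, the sketch below encodes a lifecycle-stage checklist and flags unresolved questions. The stages and questions paraphrase common elements of frameworks such as FRAIA; they are illustrative assumptions and do not reproduce our published framework.

```python
# Minimal sketch of a lifecycle-stage HRIA checklist. Stages and questions
# are illustrative paraphrases of common framework elements, not DAI's
# actual analytical framework.
LIFECYCLE_CHECKLIST = {
    "design": [
        "Is the public objective defined, and is an algorithm a proportionate means to it?",
        "Which human rights (privacy, non-discrimination, due process) could the system affect?",
    ],
    "data": [
        "Are training data representative of the affected population?",
        "Is personal data collected and processed on a lawful basis?",
    ],
    "deployment": [
        "Is there meaningful human review of adverse decisions?",
        "Are affected individuals informed that an algorithm was used?",
    ],
    "monitoring": [
        "Are outcomes audited for disparate impact across groups?",
        "Is there an accessible channel for appeal and redress?",
    ],
}

def review(answers: dict) -> list:
    """Return the unresolved questions, i.e., those not answered 'yes' (True)."""
    gaps = []
    for stage, questions in LIFECYCLE_CHECKLIST.items():
        for q in questions:
            if not answers.get(q, False):
                gaps.append((stage, q))
    return gaps

# Example: a review in which only the monitoring questions remain unanswered.
answers = {q: True for qs in list(LIFECYCLE_CHECKLIST.values())[:3] for q in qs}
for stage, q in review(answers):
    print(f"[{stage}] unresolved: {q}")
```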

2. Simplicity, context, and trust are key to achieving algorithmic transparency in public service delivery.

International development partners, policymakers, and regulators should take a balanced approach to algorithmic transparency and accountability in public service delivery, commensurate with the risk involved and the workload for civil servants. Simplicity, trust, and context were the factors echoed most often throughout our interviews. If algorithmic HRIA frameworks are too complicated, vague, or hard to understand, public sector officials will struggle to implement them. This was the case, for example, with the Digital Republic Act in France, which requires transparency for certain public-sector algorithms: at first, public agencies found it difficult to comply, partly because of a lack of resources and precise instructions.

3. Recognizing and addressing the implementation gap is instrumental in achieving algorithmic transparency in public service delivery.

As one Colombian interviewee said, “Ethical frameworks and even laws won’t do much good if they are little more than words on a page.” Developing countries face greater constraints in implementing fair, accountable, and transparent algorithmic systems due to weak legal and regulatory infrastructure, limited enforcement capacity, and low levels of digital and data literacy. Donors and international partners need to meet governments where they are, supporting holistic approaches and practical programming that enable the deployment of ethical, human rights-based algorithmic systems.

For instance, in Ghana, interviewees cited the lack of a government strategy and policies for AI usage, as well as a shortage of clean, usable data for analytical or algorithmic purposes. While the use of algorithms for public service delivery is not yet widespread in Ghana, the lack of data interoperability and of transparency in data usage, together with weak regulations and policies, undermines the ability to implement useful and fair algorithms.

4. Approaches to addressing algorithmic transparency in public service delivery must be tailored to local cultural, economic, and developmental contexts.

As in developed countries, the collection and use of data and the rollout of algorithmic systems must account for a country’s or region’s specific cultural and demographic context. A thoughtfully designed, inclusive algorithm deployed in one country cannot simply be replicated in another with the same results. This sentiment was perhaps most prominent among interviewees in Kenya, who separately raised concerns about inclusion and fairness in relation to gender, tribal affiliation, marginalized communities and ethnic groups, and geographic location. For instance, one Kenyan interviewee explained that she still struggles to use Google Assistant, Alexa, and Siri because the platforms do not understand her Kenyan accent. Although these platforms have been around for nearly a decade, they still fail to accommodate different dialects, accents, and cultural nuances.

5. The success of government accountability and transparency in algorithm usage relies heavily on the ability of nongovernmental sectors to understand basic digital rights and data usage.

Chile, a high-income country, has demonstrated the power of a well-educated, free civil society in enforcing government accountability and transparency. GobLab, an innovation lab within Adolfo Ibáñez University’s School of Government in Santiago, conducted extensive research into the Chilean government’s use of algorithms in collaboration with the Chilean Transparency Council. With funding from the Inter-American Development Bank’s Innovation Lab, the group later drafted and proposed a regulation that the government is on track to adopt after initial testing with various public bodies. The regulation will make Chile the first nation in Latin America to adopt standards on algorithmic transparency.

6. True accountability and transparency in algorithm usage by governments necessitate digital literacy, digital access, and digital rights awareness among the public.

Even when algorithms are transparent, individuals still need the ability to understand them, seek redress, and use the resources available to them to remedy harms. Governments and international donors should therefore prioritize digital literacy, digital rights, and digital access efforts. Governments unable to assure the accountability and transparency of an algorithm should reconsider whether the algorithm is the most appropriate solution, or incorporate non-digital means of explanation and redress.

We want to hear from you! Please send your comments regarding the proposed framework to [email protected].