Digital Identity Series Part 3: Barriers to Inclusion for All
This post is part of a series on Digital Identity.
Jul 11, 2019
Any ID—whether state-issued or purpose-made for services such as cash transfers—in some way reflects levels of equality, whether those levels are inherent in the technology or prevalent in the social ecosystem. For instance, research from the GSM Association (GSMA) in Bangladesh, Nigeria, and Rwanda showed that citizens feel equality in national ID is important because it reflects equal citizenship status for men and women. And if a biometric-based ID card is created in a refugee camp to facilitate cash transfers, every person living in that camp can rightfully expect to be registered, regardless of age, gender, or ethnicity.
Below, we list a number of ways in which bias can manifest in digital identity systems, with the aim of kicking off a discussion and moving toward solutions. As you read, I encourage you to think about how we can make sure digital ID systems are inclusive.
Technological Bias
Some technology, such as biometrics and artificial intelligence (AI), can be inherently biased. Bias in facial recognition technology is well documented. Designed largely by a tech sector dominated by white men, these technologies struggle to recognise people of other ethnicities—and particularly women within those groups—misidentifying them with alarming frequency.
Fingerprints can be worn down by manual labour, rendering them unreadable; age-related conditions such as cataracts can mean older people’s irises cannot be read. These changes present challenges to those relying on their biometrics to identify them for services such as voting, receiving cash transfers, or accepting food aid. Bias or inaccuracies in biometrics can be mitigated by regular biometric tracking—that is, checking data at regular intervals to identify gradual changes. However, where biometrics form part of an identity card, regular checks are unlikely to happen among marginalised, rural, and often mobile populations such as internally displaced people (IDPs) and refugees. Whether the dimension is ethnicity, age, or gender, more errors occur among minorities when systems are built for the majority.
Societal Bias
Technology, however smart, is not exempt from human bias. Often the implementation of technology acts as a mirror or an amplifier of the divides and biases already present in society.
The write-up from a recent Tech Salon in Washington, D.C., summarises this succinctly: “ID is embedded in your relationships and networks.” We need to understand what is going on at the individual level, and how becoming more identifiable—and the systems by which this happens—could affect, and be defined by, a person’s relationships and networks.
It is no surprise that there is a complex gender narrative associated with identity. The above-mentioned GSMA study found a perception that men need ID more than women do, with ID seen as something that professionals and business people have. Indeed, women and girls are often seen to have less need for an ID, as they can rely on their husbands’ or fathers’ ID for access to services. Caribou Digital’s research in Bangladesh found that the job a woman hoped to hold greatly affected her perception of her need for an ID card.
In its paper Identity at the Margins, Caribou Digital also outlines how systems that record beneficiaries at a household rather than an individual level can alter household power dynamics. It cites the case of Uganda, where South Sudanese women who fled ahead of their male relatives were registered as heads of household. This gave them the title of official recipient of aid, but exposed them to violence from men who saw the arrangement as a disruption to the patriarchy (War Child research in Bidi Bidi).
Marginalised communities such as migrants also often bear the brunt of bias when it comes to digital ID. For instance, unquestioning faith in the robustness and reliability of a biometric system can mean marginalised groups are not believed when the system fails them—reinforcing existing inequalities. GenderIT research in Kenya found a perception that your “last name betrays you” in terms of tribal politics, so the prospect of this information being embedded in a digital ID is frightening for some, given concerns about the security of that data and who has access to it.
In our first blog in this series, on informed consent, we spoke about how cultural beliefs can be a barrier to some biometric data collection. Practical barriers, such as a lack of time or being expected to stay close to home, can also play a part in reinforcing social norms in the ID space.
Bureaucratic Bias
In my mental map of these biases, bureaucratic bias sits somewhere between biased technology and societal bias. It covers the way in which implementers collect and process identity data, and how this can reflect or exacerbate existing bias.
Data &amp; Society defines it as including the classification of vulnerable communities and the inconsistent collection of migrants’ identity information. By classification, we mean cases in which individuals are labelled with political or economic identifiers—for instance, someone identified as an economic migrant may be treated completely differently to someone fleeing conflict. The inclusion of such a label as part of one’s digital identity can alter the way he or she is perceived and treated. Additionally, many migrants perceive biometric data collection as inherently connected to government and law enforcement—which many have learned to treat with skepticism, often rightly so.
The practicalities of the bureaucracy associated with registering for a formal identity can also exacerbate inequalities. GenderIT research in Kenya found that strict deadlines and the extensive travel required mean many women are unable to get an ID card. For someone who feels unsafe travelling into the city, cannot afford the journey, or does not have permission from their head of household, the time needed to register and collect an identity card or sign up for biometrics may simply be too much.
Regulatory Bias
Part 2 of this Digital Identity Series focused on politics, but I want to mention here an example of how regulations can exacerbate inequality.
Digital identity systems that rely on mobile technology are restricted by know-your-customer (KYC) requirements and biased towards those with ID cards—that is, citizens. For instance, in Bangladesh, Rohingya refugees lack the forms of ID required to register for a digital identity. The Bangladesh Telecommunication Regulatory Commission has reportedly banned the sale of subscriber identity module (SIM) cards to Rohingya refugees, and individuals have been arrested for selling mobile devices and SIMs to Rohingya. Also in Bangladesh, an individual can register up to 15 SIM cards, so many women rely on men to register SIMs on their behalf, perpetuating gender barriers. This is not an isolated issue: 173 countries host 19.9 million refugees, yet according to the GSMA, 75 percent of these countries legally require people to present an acceptable proof of identity to register for a mobile SIM card.
My recent reading on the subject has flagged these four “buckets” of barriers to inclusion in the digital ID space, but the list is by no means exhaustive. Research by thought leaders such as the GSMA, Caribou Digital, and the Omidyar Network is bringing us ever closer to understanding how to design inclusive digital identity programmes. If you are working on digital ID, we’d love to hear from you! Please share your lessons with us on Twitter @DAIGlobal.