Last month, DAI’s Center for Digital Acceleration hosted a panel discussion during the NetHope Global Summit featuring artificial intelligence (AI) experts from the public, private, and civil society sectors to discuss practical uses for facial recognition technology (FRT) and its impact on digital inclusion. The panelists presented specific case studies where their work intersects with AI, facial recognition technologies, and digital inclusion, offering lessons learned for attendees to apply.

Meet Our Speakers

Our first speaker, Jai Vipra, is a Senior Resident Fellow at the Centre for Applied Law and Technology Research at the Vidhi Centre for Legal Policy. Her work focuses on the economics of digital platforms and the regulatory implications of emerging technologies.

In her presentation, Vipra shared her research on the use of FRTs, specifically closed-circuit television (CCTV) camera networks, for policing in Delhi, India. The study mapped police stations and CCTV camera networks alongside population demographics. She found that police stations are spread unevenly throughout the city, with some areas of Old Delhi more heavily policed than others. The overpoliced and oversurveilled areas had a large proportion of Muslim residents, demonstrating that Muslim communities are the most surveilled in Delhi and the most vulnerable to the negative impacts of FRTs. You can read Vipra’s full research report here.

Our second speaker was Madeleine Stone, a Legal and Policy Officer with Big Brother Watch in the United Kingdom (UK). Stone’s work has largely focused on the impact of emergency COVID-19 regulations on civil liberties. She has worked with a range of organizations that promote freedom of expression in the UK and globally, including English PEN, Index on Censorship, and Lawyers Without Borders.

Stone presented on the current use of FRT in public spaces in the UK, including by schools, police, and shops. She spoke about the campaigns that Big Brother Watch runs, most of which aim to raise public awareness of FRT operating in public spaces. She further noted that although proponents of FRTs claim the technology can be used to identify missing children or victims of trafficking, there is very little evidence that it is used for anything more than policing.

Our final speaker was Danilo Krivokapić, the Director of the SHARE Foundation, a digital rights organization in Belgrade, Serbia. A lawyer by training, he works on data protection, the impact of data-driven business models on privacy, legal standards for information security, and cybercrime. He is also the founder of the “Thousands of Cameras” initiative, through which individuals and organizations advocate for the responsible use of surveillance technology.

Like Vipra and Stone, Krivokapić used his own work as a case study, speaking about the presence of FRT in Serbian public spaces. He also discussed the SHARE Foundation’s efforts to gather and disseminate resources that safeguard citizens’ online rights, focusing on the role of law and governance in this work and the challenges he and his organization face in organizing to regulate and mitigate the negative impacts of FRT in Serbia.

Key Takeaways

Following the presentations and dialogue between panelists, we came to the following conclusions:

  • FRTs generally have low accuracy when identifying individuals, and they are even less accurate when identifying individuals with marginalized identities.
  • Even if FRTs were 100 percent accurate, they should be used with extreme caution or not at all, given the ethical and privacy concerns they present.
  • There is a lack of regulations guiding FRT implementation and the use of FRT for state surveillance. Such regulations are essential for the ethical use of FRTs.

While the novelty of FRTs might energize digital development practitioners, this discussion encourages us to take a closer look at the negative impacts of emerging technologies and to consider whether and how these technologies include a variety of users. In this case, FRTs do not appear to be compatible with digital inclusion.