This week marked the 71st anniversary of the Universal Declaration of Human Rights (UDHR), the founding document that set out, for the first time, fundamental human rights to be universally protected. Working in international development, most of us are committed to the 30 rights laid out in the UDHR. As the digital revolution evolves, countries have adopted a myriad of legal frameworks to protect citizens in the digital space, including laws focused on privacy and data protection. Often, however, these policies are not aligned with the most recent technological advancements.

In the humanitarian and development sectors, we are digitising faster than the legal and ethical frameworks that govern this digitisation can evolve. Some fear that the technology sector remains virtually a human rights-free zone.

![Eleanor_roosevelt_human_rights_english_-Resize_548w.jpg.png](/uploads/Eleanor_roosevelt_human_rights_english-_Resize_548w.jpg.png)Eleanor Roosevelt and the UDHR in November 1949. Photo: Wikimedia Commons.

To fulfill the mandate of the UDHR today, we need to fully understand what it looks like in a digital world and mandate responsibilities and accountability that go beyond nonbinding ethical frameworks and guidance. Indeed, in the recently released Contract for the Web, Tim Berners-Lee outlines three principles for the internet, one of which aims to “respect and protect people’s fundamental online privacy and data rights.” He asserts that these protections must be underpinned by the rule of law and applicable to all people and all data.

The need for these types of laws is clear, as examples of technology being used to violate human rights continue to grow. We’re all familiar with how social media platforms are used for surveillance or electoral manipulation, but the bigger ethical question is whether the architecture of these platforms itself facilitates human rights violations. Each of these platforms is governed by algorithms that shape our access to information, and may therefore disadvantage one person over another in a job search, or present only one side of the story during civic strife. (If you’re interested in reading more about the impact of algorithmic bias, I highly recommend “Weapons of Math Destruction” by Cathy O’Neil.) Since social media has in many cases become a public space, does its very architecture, as an algorithmic curator of information, abuse our human rights?
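To make that curation point concrete, here is a minimal, hypothetical sketch. Every post, viewpoint, and score below is invented for illustration; this is not any real platform’s ranking code. It shows how optimising a feed purely for predicted engagement can quietly serve a user only one side of a story:

```python
# A deliberately tiny, hypothetical sketch of engagement-driven feed
# ranking. All posts, viewpoints, and scores are invented for
# illustration; this is not any real platform's code.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    viewpoint: str               # "A" or "B" during a contested event
    predicted_engagement: float  # model's click-likelihood for this user

def rank_feed(posts: list[Post], top_k: int = 3) -> list[Post]:
    """Order a feed purely by predicted engagement.

    Nothing here checks whether the result is balanced: whichever
    viewpoint the model predicts the user will click on more simply
    crowds the other side out of the visible feed.
    """
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)[:top_k]

feed = [
    Post("Rally was peaceful, say organisers", "A", 0.91),
    Post("Crowds celebrate downtown", "A", 0.87),
    Post("Supporters share highlights", "A", 0.82),
    Post("Eyewitnesses report injuries", "B", 0.34),
    Post("Officials dispute the account", "B", 0.28),
]

for post in rank_feed(feed):
    print(post.viewpoint, "|", post.text)
# Prints only viewpoint "A": no one censored viewpoint "B", yet the
# user never sees the other side of the story.
```

The point is not that an engineer set out to suppress a viewpoint; the bias emerges from what the objective function does and does not measure.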

What about biometrics? The case of Huduma Namba in Kenya is an interesting one: a national ID registration system that made biometric registration compulsory for access to basic goods and services. Citizens therefore had to choose between giving up their biometric data and losing access to government services. This was seen as a violation of the rights to privacy, equality, and nondiscrimination, as well as the right to public participation. Eventually, the courts mandated that Huduma Namba registration be voluntary and that benefits not be conditional on registering.

It is increasingly recognised that technology platforms can be inherently abusive, or can be used to disadvantage the vulnerable. Accordingly, there are numerous calls on developers to make human rights intrinsic to design, using processes such as human-centered design to properly assess the risks, needs, and wishes of the target audience.

![thought-catalog-tRL_Rkh6D8o-unsplash.jpg](/uploads/thought-catalog-tRL_Rkh6D8o-unsplash.jpg)Photo by Thought Catalog on Unsplash.

We could share an endless list of recommendations covering the 30 rights, from privacy to freedom of expression. But here are five articles and reports that have sparked our thinking in this area:

  1. The report of the Special Rapporteur on extreme poverty and human rights on the digital welfare state presents the almost dystopian future we are stumbling into. It raises concerns that, despite numerous analyses of the human rights implications of technologies such as artificial intelligence and biometrics, no protections are currently grounded in law.
  2. The Omidyar Network’s Ethical Operating System is a guide to anticipating the future impact of today’s technology. It aims to help makers of technology, product managers, and others see problems before they arise, supporting good design with a comprehensive list of potential risks to think through upfront.
  3. Adamantia Rachovitsa outlines why privacy, as a fundamental human right, should be treated not just as a legal issue but as a design feature of technological solutions.
  4. This article argues that online gender-based violence cannot be addressed by government legal systems alone, but needs to be tackled by technology companies in the very design of their platforms.
  5. This report on “Algorithms and Human Rights” by the Council of Europe does what I just didn’t have space for in this post: it goes through the rights one by one, from “fair trial and due process” to “freedom of expression,” and outlines how algorithms can impact each of them. It’s a long read, but worth it for those wanting to really dig into the topic.

We hope to continue the conversation on human rights in the ICT4D space. Tweet any reading you recommend to us at @DAIGlobal and @ChloeMessenger!