A Look Back at 2024: Highs and Lows
Jan 9, 2025
The Center for Digital Acceleration takes a look back at an eventful year in the world of digital development, both for us at DAI and for the global digital and donor ecosystem.
Policies, Principles, Compacts
Last year marked the (re)launch of numerous policy documents, global principles, and agreements that will shape priorities in digital development for decades to come. Led by the Digital Impact Alliance, the Principles for Digital Development received a long-planned refresh for the first time since their launch in 2014, the product of numerous global consultations and workshops.
In September, we saw the adoption of the United Nations (UN)’s Global Digital Compact, one of the first modern “comprehensive framework[s] for global governance of digital technology and artificial intelligence” at the UN level, and one of the first to include AI as a core area of focus. While not without its detractors, the compact is an important step forward in creating a common framework for cooperation around “an inclusive, open, sustainable, fair, safe and secure digital future for all.”
Finally, the U.S. Agency for International Development (USAID) released two important policy documents. The first is its inaugural Digital Policy (2024-2034), which updates USAID’s Digital Strategy, launched with the help of the DAI-led Digital Frontiers project. (On a related note, 2024 marked the end of the seven-year Digital Frontiers project, and we couldn’t be prouder of the project and its achievements. You can access the project’s public resources, toolkits, and reports here.) The second policy of note is USAID’s Democracy, Human Rights, and Governance Policy, which makes extensive mention of digital technology and includes among its major ‘policy pivots’ a commitment to “advance digital democracy by supporting rights-respecting approaches to data and technology.”
Advancing a Global Rights-Centered AI Ecosystem
In 2024, several pivotal developments aimed to advance human rights considerations in the use of AI, reflecting a global commitment to responsible AI governance:
One of the year’s key milestones was the Council of Europe Framework Convention on Artificial Intelligence, the first legally binding treaty on AI, which promotes human rights, democracy, and the rule of law. UNESCO also launched its Guidelines on the Ethical Use of AI in Judicial Systems at the WSIS+20 Forum, aiming to ensure fairness and transparency in AI applications within the judiciary. At the global level, the UN’s High-Level Advisory Body on AI released its final report, Governing AI for Humanity, in September. Meanwhile, platforms like RightsCon emphasized the risks of AI perpetuating discrimination, deepening inequities, and affecting digital freedoms, and highlighted actionable safeguards to mitigate these risks.
In 2024, there was a heightened emphasis on AI safety, marked by the establishment of new AI safety institutes and the growth of initiatives led by organizations in the United States, the United Kingdom, Singapore, and Japan. Additionally, the newly created EU AI Office, formed under the EU AI Act, is set to concentrate on developing best practices in the field.
The Global Index on Responsible AI (GRAI) evaluates the progress of 138 countries in adopting responsible AI practices through a human rights lens. The Index highlights significant gaps in global progress toward responsible AI, with governance frameworks often failing to translate into meaningful protections for human rights: two-thirds of countries scored below 25 out of 100, reflecting inadequate measures to protect human rights and promote responsible AI, and nearly six billion people live in countries lacking sufficient safeguards.
These efforts collectively aim to mitigate risks such as surveillance abuse, algorithmic bias, privacy violations, and discrimination while fostering trust, accountability, and equitable digital development. In the long term, they hold the potential to create a globally rights-centered AI ecosystem, though bridging the North-South divide in governance and decision-making remains a crucial challenge.
The (Continued) Rise of Geopolitically Motivated Cybersecurity Attacks
Geopolitically motivated cyberattacks on governments, telecommunications, healthcare providers, financial institutions, and other critical infrastructure sectors are on the rise. The Center for Strategic and International Studies published a timeline of significant cyber incidents in 2024, and Google’s 2025 Cybersecurity Forecast report predicts that geopolitical conflicts will continue to be a major driver of cyber threats targeting governments, defense sectors, global enterprises, and critical infrastructure. Threat actors, both state-sponsored groups and hacktivists acting in support of a state’s interests, are becoming more advanced, using a combination of cyber espionage, malware, AI-driven social engineering and phishing attacks, disinformation operations, data theft, and ransomware.
Significant 2024 events included:
- In December, the China-affiliated Salt Typhoon threat actor breached more than eight major U.S. telecom companies and stole the personal data of millions of U.S. citizens, including politicians and public officials. In October, Chinese hackers also conducted cyber espionage and large-scale data extraction operations against Canadian, Thai, German, and U.K. government agencies. On December 12, the U.S. Cybersecurity and Infrastructure Security Agency and the Federal Bureau of Investigation, in partnership with the national cybersecurity centers of Australia, Canada, and New Zealand, issued joint guidance warning of Chinese cyber espionage on global telecommunications networks.
- In January 2024, a Russian cybercrime group hacked Medibank, one of Australia’s largest healthcare providers, obtaining and selling the personal and health data of 12.9 million individuals.
- Russian military intelligence increasingly targeted the personal devices of Ukrainian soldiers and defense officials to obtain tactical data on military operations. Ukraine’s 2024 investigation into the December 2023 Kyivstar cyberattack, attributed to the Russian intelligence group Sandworm, which left millions of Ukrainians without internet access or cellular service and disrupted banks, ATMs, and air raid alert systems, found that the threat actor not only breached the system but had likely been inside for months gathering data.
In 2025, Google anticipates that threat actors will use AI to create more sophisticated cyberattacks, including more convincing phishing, social engineering, and deepfakes. These trends demonstrate the growing need for cyber resilience and data protection across all sectors. Cybersecurity is not only a concern for technology organizations but an essential component of securing the operations of governments, hospitals, schools, banks, and other institutions critical to everyday functions. There is an opportunity for development programs to mainstream cybersecurity and data protection as a component of existing capacity-building efforts, rather than treating them as a stand-alone consideration for digital development projects.
In response, donor governments and bilateral and multilateral institutions are taking action to build cybersecurity into the global development agenda. The UN, EU, U.S. Department of State, Microsoft, Visa, Interpol, and others are among the endorsers of the Accra Call, which seeks to mainstream cyber resilience and capacity building into development efforts. In May 2024, the U.S. Department of State released the International Cyberspace and Digital Policy Strategy, which recognizes that cyber threats are pervasive in the changing geopolitical environment. It calls for building cyber resilience by securing infrastructure, promoting responsible and human rights-respecting state behavior, and advancing digital solidarity by strengthening multilateral and bilateral partnerships (see this interview with Chris Painter). The UN’s new Cybercrime Convention, finalized in August 2024, is the “first global legally binding instrument on cybercrime.” However, the final agreement is highly contentious: civil society and advocacy groups argue that it can be used to further digital repression globally, with groups like Access Now calling it “overbroad in its scope of criminalization.”