Artificial Intelligence in Healthcare for Development 4.0: Recommendations for Policymakers
Aug 10, 2022
Today, artificial intelligence (AI) is broadly understood to include not only long-term efforts to simulate the general intelligence humans exhibit, but also fast-evolving technologies—such as convolutional neural networks—that affect many facets of modern society, including healthcare, national security, social media, and agriculture. The sweeping changes brought by AI have significantly widened the gap between government policy and the innovative business models that rely on AI deployment. AI is changing norms and business models throughout society, demanding new and effective policy responses from governments on subjects with little real precedent. Yet while governments struggle to adapt to the rapid pace of change, AI also offers the potential to transform how policy itself is made by providing new tools and methods for policy development.
As a result of the COVID-19 pandemic, policymakers across the globe have started implementing policies and regulations to ensure that possible long-term effects on inequality, exclusion, discrimination, and global unemployment do not become the “new normal.” In particular, vulnerable groups—such as older persons, people living in poverty, persons with disabilities, young people, and indigenous peoples—have been disproportionately affected by the pandemic’s harmful impacts and risk falling behind in the global recovery. If deployed correctly and with human-centric values at the core, AI can be a vital tool for improving global health, sustainability, and well-being—and for bridging the inequality gap. However, not all countries are equally prepared for the impact of AI in healthcare.
Regulation of AI in healthcare is still in its infancy. Many countries and international organizations have so far issued only national plans, guidelines, or codes—which often highlight essential principles for developing ethical AI—without passing much substantive law. Notable examples include the European Union’s Ethics Guidelines for Trustworthy AI, the European Commission’s Proposal for a Regulation on a European approach for Artificial Intelligence, the United Nations Educational, Scientific and Cultural Organization’s Recommendation on the Ethics of Artificial Intelligence, and the Organisation for Economic Co-operation and Development’s Council Recommendation on Artificial Intelligence.
AI deployment in healthcare can drive game-changing improvements for underserved communities and developing countries. From enabling community health workers to better serve patients in remote rural areas to helping governments in developing countries prevent deadly disease outbreaks, there is growing recognition of the potential of AI tools to improve health access, quality, and cost. Health systems in many developing countries face obstacles such as shortages of healthcare workers, medical equipment, and other medical resources. There are numerous examples of big data and AI being used in healthcare in developing countries. In The Gambia, a probabilistic decision-making system has been deployed to support rural health professionals in identifying life-threatening diseases in outpatient clinics, detecting 88 percent of cases. Similarly, in South Africa, nurses have used a computerized aid, built on a cost-effectiveness AI system, to guide treatment prescriptions. Kimetrica, now part of the American Institutes for Research, employed facial recognition technology in its machine learning tool, MERON, as a predictor of malnutrition in children under the age of 5. Kimetrica’s method is useful in low-resource areas, such as conflict zones, where it is impractical to send workers with bulky measurement equipment.
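To give a flavor of how such probabilistic decision support works, here is a minimal naive Bayes triage sketch. The diseases, symptoms, and probabilities are hypothetical illustrations and are not drawn from the system deployed in The Gambia.

```python
# Minimal naive Bayes triage sketch (illustrative only; all values hypothetical).

PRIORS = {"malaria": 0.20, "pneumonia": 0.15, "other": 0.65}

# P(symptom present | disease) -- hypothetical values.
LIKELIHOODS = {
    "malaria":   {"fever": 0.90, "cough": 0.30, "fast_breathing": 0.20},
    "pneumonia": {"fever": 0.70, "cough": 0.85, "fast_breathing": 0.80},
    "other":     {"fever": 0.20, "cough": 0.25, "fast_breathing": 0.05},
}

def posterior(observed: dict[str, bool]) -> dict[str, float]:
    """Return normalized P(disease | observed symptoms)."""
    scores = {}
    for disease, prior in PRIORS.items():
        p = prior
        for symptom, present in observed.items():
            p_sym = LIKELIHOODS[disease][symptom]
            p *= p_sym if present else (1.0 - p_sym)
        scores[disease] = p
    total = sum(scores.values())
    return {d: p / total for d, p in scores.items()}

# A patient presenting with fever and fast breathing but no cough:
print(posterior({"fever": True, "cough": False, "fast_breathing": True}))
```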
However, the deployment of AI in resource-constrained settings has been surrounded by considerable hype, and more research is needed on how to deploy and effectively scale AI solutions in health systems across developing countries. It is challenging to take disruptive technology innovations from developed countries and adapt them to the unique needs of the developing world.
The quality and availability of healthcare services in developing countries lag behind those in developed countries, leading to disparate health outcomes. According to the World Health Organization, more than 40 percent of all countries have fewer than 10 medical doctors per 10,000 people, and more than 55 percent have fewer than 40 nursing and midwifery personnel per 10,000 people. As of 2017, only one-third to one-half of the global population could obtain essential health services. The lack of adequate digital and data infrastructure in developing countries further undermines the prospects of AI deployment in healthcare settings.
Data provides the quantitative basis for the deployment of AI and digital resources: it is the lifeblood of the digital economy and an essential input for AI technologies. Many developing countries have been grappling with delivering essential healthcare services efficiently. To do so, health agencies need data about their populations to better understand the needs they must fill. The need for data to ensure efficient management and delivery of health services in low-resource environments has become increasingly pressing.
The Center for Digital Acceleration’s research is intended to identify both the barriers to AI deployment at scale in developing countries and the types of regulatory and public policy actions that can best accelerate the appropriate use of AI to improve healthcare in developing-country contexts. AI cannot be considered a panacea for global health challenges, and scaling these technologies carries risks and tradeoffs. Adoption, acceleration, and use of AI should therefore strengthen local health systems and be owned and driven by local communities. Policymakers and regulators need to consider the following when deploying AI in developing countries’ healthcare systems:
- Health data constitute roughly 30 percent of globally stored data. For health data to improve decision-making, datasets need to be more diverse and robust. However, barriers of data access and data quality still prevent innovators in developing countries from using health data efficiently and effectively, and the COVID-19 crisis has exposed these gaps even further. Often, health data in developing countries are incomplete or of low quality, which can mislead policymakers trying to allocate resources effectively and efficiently. Health datasets are siloed and locked within institutions, and countries grapple with linking different data sources and using the data for secondary research. A simple completeness audit, sketched after this list, is one concrete first step toward surfacing such gaps.
- Apart from data access, interoperability is another challenge. Often, health datasets are not interoperable or portable among institutions, which leads to less diverse datasets that may not represent the patient population at the national level. Weak interoperability and standardization also hamper the building of regional and global health datasets and impede AI and machine-learning solutions across borders. COVID-19 has demonstrated the need for data interoperability and standardization; one common remedy, mapping local records to a shared standard, is sketched after this list.
- One of the key impediments to the effective deployment of AI in healthcare is access to findable, accessible, interoperable, and reusable (FAIR) data. In developing countries, this problem is exacerbated because data is not always digitized and is not easily accessible due to private sector capture. It has been suggested that data quality issues are critical when building AI for clinical settings.
- Data representativeness is another problematic issue related to data quality. AI’s output is shaped by the data fed into it. Computer-based recommendations are often taken at face value, on the assumption that whatever result an algorithm produces is objective and impartial. In reality, humans choose the data that goes into an algorithm, and these choices can embed human biases, which in turn can harm underrepresented groups. Such biases can occur at any phase of AI development and deployment, but the most common source is data that does not sufficiently represent the target population. For example, women and people of color are typically underrepresented in clinical trials. If algorithms that analyze skin images were trained on images of white patients and then applied more broadly, they could miss melanomas in people of color. In another instance, a team of United Kingdom scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less likely to work well for racial groups from underrepresented countries. Regulators in developing countries should ensure that AI-powered tools perform equally well across different groups of people; one concrete check, reporting model sensitivity per subgroup, is sketched after this list.
- Even if diverse datasets are generated, this might not translate into AI tools that can be rolled out reliably in low-income countries, where disease profiles differ from those in developed nations. In sub-Saharan Africa, for instance, women are diagnosed with breast cancer younger, on average, than their peers in developed countries, and their disease is more advanced at diagnosis. Diagnostic AI tools trained on European mammograms therefore learn to identify early-stage disease in older women, and deploying such tools unchanged in sub-Saharan Africa could have devastating consequences. Any regulatory framework should provide for both health and privacy rather than forcing a choice between them: AI will serve the welfare and well-being of developing countries only if safeguards such as “human-in-the-loop” and “privacy by design” are built in. (A minimal human-in-the-loop pattern is sketched after this list.)
- Data and algorithms are integral to machines’ ability to learn, and the outcome of AI depends on the quality of both. Data should not be biased, data ownership should be clearly defined, and algorithms should be transparent enough to allocate liability among stakeholders. The responsibilities of all stakeholders need to be delineated so that damage can be prevented and, in the worst case, harm can be repaired or compensated. A proper regulatory framework would ensure these properties for data, algorithms, and the AI process as a whole.
- Building trust in AI solutions in healthcare means ensuring that AI tools meet demand on the ground and earn buy-in from the communities they are intended to help. Rolling out healthcare AI tools in developing countries requires an in-depth understanding of the existing bottlenecks in the healthcare system. For example, an AI tool that identifies people with tuberculosis from chest X-rays, primed for use in India, could save time, money, and lives in South Africa, especially in rural areas with no specialists to examine such images. However, to obtain the images in the first place, communities need X-ray machines and people to operate them. Failure to provide those resources will mean that AI tools simply serve those already living near better-resourced clinics.
- No single country or stakeholder has all the answers to these challenges. International cooperation and multi-stakeholder discussion are crucial to shaping responses that guide the development and use of trustworthy AI for broader public health.
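The sketches below illustrate, in Python, several of the checks and safeguards mentioned in the list above; all field names, values, and thresholds are hypothetical illustrations rather than references to any deployed system. First, a minimal data-completeness audit of the kind that could surface the quality gaps described above:

```python
# Completeness audit for a batch of patient records. Field names are
# hypothetical; a missing field, None, or empty string counts as unusable.

REQUIRED_FIELDS = ["age", "sex", "weight_kg", "diagnosis_code"]

def completeness(records: list[dict]) -> dict[str, float]:
    """Return the fraction of records with a usable value for each field."""
    report = {}
    for field in REQUIRED_FIELDS:
        usable = sum(1 for r in records if r.get(field) not in (None, ""))
        report[field] = usable / len(records)
    return report

records = [
    {"age": 34, "sex": "F", "weight_kg": 61.0, "diagnosis_code": "B54"},
    {"age": 7,  "sex": "M", "weight_kg": None, "diagnosis_code": "J18"},
    {"age": 52, "sex": "F", "weight_kg": 70.2, "diagnosis_code": ""},
]
print(completeness(records))
# weight_kg and diagnosis_code each come back at roughly 0.67, flagging gaps.
```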
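On interoperability, one widely used approach is mapping locally structured records to a shared standard such as HL7 FHIR so that they can move between institutions. A minimal sketch, assuming a hypothetical local clinic schema:

```python
import json

def to_fhir_patient(local: dict) -> dict:
    """Map a record from a hypothetical local schema to a minimal FHIR Patient resource."""
    return {
        "resourceType": "Patient",
        "identifier": [{
            "system": "urn:example:clinic-id",  # hypothetical identifier namespace
            "value": local["patient_id"],
        }],
        "name": [{"family": local["surname"], "given": [local["first_name"]]}],
        "gender": {"M": "male", "F": "female"}.get(local["sex"], "unknown"),
        "birthDate": local["dob"],  # FHIR expects YYYY-MM-DD
    }

record = {"patient_id": "GM-0042", "surname": "Jallow",
          "first_name": "Awa", "sex": "F", "dob": "1988-03-14"}
print(json.dumps(to_fhir_patient(record), indent=2))
```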
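On representativeness, one concrete regulatory check is to report a model's sensitivity (true-positive rate) separately for each demographic subgroup rather than as a single aggregate number. A minimal sketch with hypothetical labels and predictions:

```python
from collections import defaultdict

def sensitivity_by_group(results: list[tuple[str, int, int]]) -> dict[str, float]:
    """results holds (group, true_label, predicted_label), with 1 = disease present."""
    true_pos = defaultdict(int)    # correctly detected cases per group
    actual_pos = defaultdict(int)  # all actual cases per group
    for group, y_true, y_pred in results:
        if y_true == 1:
            actual_pos[group] += 1
            true_pos[group] += int(y_pred == 1)
    return {g: true_pos[g] / actual_pos[g] for g in actual_pos}

# Hypothetical melanoma-model outputs, grouped by skin tone:
data = [("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 1, 0),
        ("darker", 1, 0), ("darker", 1, 0), ("darker", 1, 1)]
print(sensitivity_by_group(data))  # roughly {'lighter': 0.67, 'darker': 0.33}
# A gap this large is exactly the disparity the melanoma example above warns about.
```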
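Finally, “human-in-the-loop” can be as simple as routing low-confidence model outputs to a clinician instead of acting on them automatically. A minimal sketch, with hypothetical thresholds:

```python
REFER_THRESHOLD = 0.85  # auto-flag for follow-up above this score (hypothetical)
CLEAR_THRESHOLD = 0.10  # auto-clear below this score (hypothetical)

def route(case_id: str, model_score: float) -> str:
    """Decide whether a model's score is acted on automatically or reviewed by a human."""
    if model_score >= REFER_THRESHOLD:
        return f"{case_id}: flag for clinical follow-up (score {model_score:.2f})"
    if model_score <= CLEAR_THRESHOLD:
        return f"{case_id}: no action needed (score {model_score:.2f})"
    return f"{case_id}: defer to clinician review (score {model_score:.2f})"

# Hypothetical tuberculosis X-ray scores:
for case_id, score in [("xr-001", 0.93), ("xr-002", 0.04), ("xr-003", 0.42)]:
    print(route(case_id, score))
```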