We Need More Evaluation Criteria for Using Artificial Intelligence in Global Health
This year’s technology buzzword is artificial intelligence (AI), which means you’ve already been asked how your organization can incorporate AI and machine learning into your health programming. You may already be using aspects of AI to augment and enhance, not replace, activities such as running natural language chatbots and applying pattern recognition to satellite imagery.
Yet because AI is such a new technology, there are few, if any, resources available to thoroughly evaluate the what, where, and how of using it in our global health programs. So far, there are four strong publications to ground our thinking about this new technology:
Artificial Intelligence in Global Health, from USAID and other donors
Making AI Work for International Development, from USAID
Responsible AI Practices, from Google
Trusted Artificial Intelligence, from IBM
While each of these publications advances our understanding of AI, we are still missing a foundational document.
Questions to ask about artificial intelligence activities
We need to have a set of criteria to evaluate how we are designing and developing AI systems to ensure that we are being responsible with this new technology, evoking the simplest and strongest ethical code: do no harm.
That was the focus of the Technology Salon on How to Evaluate Artificial Intelligence Use Cases for Development Programs. As part of the event, we developed an evaluation framework for artificial intelligence solutions with guidance from these thought leaders:
Adele Waugaman, Senior Advisor, Digital Health, USAID
Priyanka Pathak, AI for Development Course Facilitator, TechChange
Shali Mohleji, Technology Policy, Government and Regulatory Affairs, IBM
Richard Stanley, Senior Technical Advisor, Digital Health, IntraHealth International
Salon members helped draft an AI evaluation framework that built on the Principles for Digital Development to create an approach we can all use in our international development programming.
We want your input to improve this document, which will serve as the foundation for a future publication.
Humans are still central to artificial intelligence
The need for human input and control in every aspect of AI activities flowed throughout the Technology Salon and comes through in the draft AI framework. Core ideas included:
It’s our responsibility to explain AI. As development practitioners and technology experts, it’s our responsibility to make sure that AI applications and their components (data, algorithms, output) are explained in a way that our constituents understand.
We should augment humans, not replace them. We need to focus the conversation on how AI can augment human decision-making and extend our reach, building on the much-needed human touch. This runs counter to one current narrative that AI is meant to replace human effort.
Data divides drive many concerns. Like digital divides, there are many data divides. One of the largest is the basic lack of data on our constituents that we would need for training, using, and validating AI. This gap drives the use of proxy data, which can radically increase bias in results.
As AI rises up the hype cycle to the peak of inflated expectations, we need to continue discussions like this one to make sure we can utilize AI for the good of the global health ecosystem.