The terms “machine learning” and “artificial intelligence” (AI) conjure up feelings that are equal parts fear and fascination. Why is that? Until recently, the prospect of a piece of software making human-like decisions resided safely in the far-fetched expectations of 1960s-era computer scientists or the plot lines of science fiction novels. Today, however, after decades of unmet expectations, we finally have AI systems that are beginning to influence our lives in tangible ways. Voice recognition systems like Amazon’s Echo and Apple’s Siri, and once-unimaginable fantasies like self-driving cars, are on the market for consumers, with more exciting life-like systems to come. We have also seen a few early signs of robotic autonomy that make us feel uneasy, like the Russian robot that learned how to escape the lab!

Machine learning (ML) is a branch of AI that relies on data. And we, the addicted users of the internet, have been providing the training data for the last 15 or so years. ML is indeed here and in our lives in many helpful ways. Spam filters learn by reading billions of spam emails. Language translation services on the web learn from gigabytes of translated text constantly fed into the system. Google’s self-driving cars are learning by driving millions of miles of roads, and Facebook can auto-caption images to help visually impaired users partake in social media. Even the Postal Service is using these new technologies: the U.S. Postal Service (USPS) uses ML to read hand-written addresses and automatically route mail to its destination. Other parts of the U.S. Government are using ML to automate easy decisions. On the horizon are self-flying personal planes, because apparently automating a path through the air is easier than navigating a chaotic city street. Life-like writing is getting better, too. Can you tell who wrote these poems: human or machine?
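To make the spam-filter example above concrete, here is a minimal sketch of how a filter “learns” from labeled messages. It uses naive Bayes word counts, a classic spam-filtering technique; the four training messages are invented for illustration, whereas real filters train on billions of emails and many more signals than raw words.

```python
from collections import Counter
import math

def train(messages):
    """messages: list of (text, label) pairs; returns per-label word counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher log-probability under naive Bayes."""
    vocab = len(set(counts["spam"]) | set(counts["ham"]))
    scores = {}
    for label in counts:
        # prior: fraction of training messages carrying this label
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((counts[label][word] + 1) / (n + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday?", "ham"),
]
counts, totals = train(training)
print(classify("claim your free money", counts, totals))  # → spam
```

The key point is that nobody hand-writes rules like “flag the word free”; the statistics of the training data do that work, which is why more spam seen means a better filter.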

The potential for ML applications in international development is huge. How? Let’s look at an example: The Google Photos App on my phone has access to 6,345 images. I have never tagged any of my photos, and I don’t need to, because the app classifies them automatically using ML-based techniques. By comparing the image structures of the photos on my phone with the millions of images Google has already analyzed, it can auto-tag them with keywords such as “beach,” “food,” or “group photo.” Here is what happens when I search with the beach keyword:

[Screenshot: Google’s Photos App uses ML to tag photos]
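The comparison idea behind this auto-tagging can be sketched in a few lines: reduce each image to a feature vector and let an untagged image inherit the tag of its nearest labeled neighbor. The three-number vectors below are made-up stand-ins; production systems like Google Photos derive feature vectors from deep neural networks rather than anything this simple.

```python
import math

# Hypothetical labeled feature vectors (invented numbers, e.g. coarse
# color statistics: beach scenes bright, food photos warmer and darker).
labeled = {
    (0.8, 0.7, 0.9): "beach",
    (0.9, 0.6, 0.8): "beach",
    (0.6, 0.3, 0.2): "food",
    (0.7, 0.2, 0.3): "food",
}

def auto_tag(features):
    """Tag an image with the label of its closest neighbor (1-nearest-neighbor)."""
    nearest = min(labeled, key=lambda v: math.dist(v, features))
    return labeled[nearest]

print(auto_tag((0.85, 0.65, 0.85)))  # → beach (closest to the "beach" examples)
```

Searching for “beach” then amounts to filtering photos by their inferred tag, with no manual tagging anywhere in the loop.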

Tagging each photo manually would have been time consuming and error prone. Similarly, this ML-driven clustering approach can be applied to tasks that would normally require hundreds of hours of mundane manual work. USAID’s Women in the Economy project is looking at using an ML technique called feature extraction as a way to automate the alignment of unstructured CVs and resumes with available jobs, helping Afghan women access employment opportunities. This type of large-scale unstructured text processing is prohibitively slow by hand but entirely achievable with the help of software that makes educated guesses about compatibility based on the frequency of relevant terms in a CV.
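A hedged sketch of that frequency-based matching: score each CV against each job posting by the cosine similarity of their word-count vectors, and recommend the best-scoring job. The job texts and CV below are invented; a real pipeline would add feature-extraction steps such as tf-idf weighting, stemming, and a domain vocabulary, but the core idea is the same.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two word-count Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_match(cv_text, jobs):
    """Return the job whose posting shares the most term weight with the CV."""
    cv = Counter(cv_text.lower().split())
    return max(jobs, key=lambda t: cosine(cv, Counter(jobs[t].lower().split())))

jobs = {
    "accountant": "bookkeeping ledgers financial reporting accounting",
    "nurse": "patient care clinic health records nursing",
}
cv = "five years of accounting and financial reporting experience"
print(best_match(cv, jobs))  # → accountant
```

Run over thousands of CVs and postings, this kind of scoring turns hundreds of hours of manual triage into a ranked shortlist a human can review.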

But don’t we always need a human in the loop? Well, yes, most of these systems still need human supervision. But before we get on our non-robotic high horse, let’s reflect for a moment on the hilariously flawed ways we make decisions today. Parole boards make different determinations later in the day, because of decision fatigue! Researchers go to enormous lengths just to prevent confirmation bias. Individual investors, with their own money on the line and against their better judgement, make poor investment choices based on cognitive biases such as the sunk cost fallacy. Indeed, the list of cognitive biases, or patterns that lead to irrational decisions, is long. Is human judgement really that great? At the least, we would be wise to augment our decision-making capabilities with some assistance from the rapidly evolving world of AI.