Avoiding the Same Mistakes: Understanding and Countering Bias in the Deployment of Artificial Intelligence for Humanitarian Assessments
Date
2019-03
Authors
Tino Kreutzer
Abstract
An effective response to humanitarian emergencies relies on detailed information about the needs of the affected population. In recent years, most primary data collection for this purpose has moved to handheld computer-assisted personal interviewing technologies and, to a smaller extent, to computer-assisted telephone interviews. Natural language processing (NLP), a type of artificial intelligence (AI), provides radical new opportunities to capture qualitative data from the voice responses of thousands of people per day and to analyze it for relevant content, informing humanitarian emergency decisions more rapidly. But this innovation, currently in its pilot stages for deployment in Yemen, would rely heavily on opaque algorithms and training data to convert qualitative responses into data for operational planning purposes. Based on key informant interviews with engineers and humanitarian survey specialists, and on a review of the latest proposals for countering bias in AI development, this paper provides an overview of the major ethical challenges of deploying NLP in humanitarian emergencies. I demonstrate that earlier quantitative data collection methods carry a different set of biases that have become entrenched in humanitarian assessments, and show how we may avoid similar mistakes as data collection becomes increasingly automated.
Description
AUTHOR AFFILIATION: Tino Kreutzer, York University, Canada, kreutzer@yorku.ca