A living systematic review of bias mitigation methods in Natural Language Processing for equitable healthcare AI
This article may be relevant to your organisation if you are using or plan to use natural language processing.
Bias in natural language processing (NLP) systems used in healthcare is a growing concern, particularly when applied to free-text data that reflects the experiences of underserved and marginalised populations. While NLP offers a scalable solution to analyse large volumes of unstructured data, these systems risk perpetuating existing disparities if their development and deployment are not informed by fair, inclusive practices.
This project is part of HUMBLE (Human-centred Method for Bias-reducing Algorithms with NLP and Qualitative analysis for Improved Health Outcomes of Underserved and Marginalised Populations). The project aims to develop a living systematic review that continuously tracks and synthesises emerging research on bias detection and mitigation strategies in NLP. By building a structured, evolving repository of methods, we will support the responsible and equitable adoption of NLP in health and care settings.
We want to make sure that the results of the review are useful, and we aim to create tools and techniques that RADIANT community members can apply in their work. We also want to understand how helpful and easy to use our tool is. Your feedback will help us make the tool more practical, accessible, and useful for real-world work, especially where fairness and equity matter.
Aim: To build a practical, continuously updated repository of NLP bias mitigation methods that supports equitable AI development in health and care.
Objectives:
• Conduct a living systematic review of NLP bias mitigation methods, with an emphasis on those relevant to healthcare and social inequalities.
• Develop a structured, publicly accessible repository that categorises methods by type (e.g. pre-processing, in-processing, post-processing) and application context.
• Contribute to the development of best practice guidance for reducing algorithmic bias in NLP applied to health and care systems.
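To make the categorisation in the second objective concrete, below is a minimal sketch of how repository entries might be tagged by mitigation stage and application context. All names, fields, and example methods here are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass

# Hypothetical stages, matching the taxonomy in the objectives above.
MITIGATION_STAGES = ("pre-processing", "in-processing", "post-processing")

@dataclass
class MethodEntry:
    """One illustrative repository entry (assumed schema, not the real one)."""
    name: str
    stage: str    # one of MITIGATION_STAGES
    context: str  # application context, e.g. "clinical notes"

    def __post_init__(self):
        if self.stage not in MITIGATION_STAGES:
            raise ValueError(f"unknown mitigation stage: {self.stage}")

def by_stage(entries, stage):
    """Return all entries belonging to a given mitigation stage."""
    return [e for e in entries if e.stage == stage]

# Example entries; the method names are well-known examples from the
# bias-mitigation literature, used here only to show the categorisation.
repo = [
    MethodEntry("Counterfactual data augmentation", "pre-processing", "clinical notes"),
    MethodEntry("Adversarial debiasing", "in-processing", "model training"),
    MethodEntry("Equalised-odds calibration", "post-processing", "risk scores"),
]
```

Structuring entries this way would let users of the repository filter methods by where they intervene in the NLP pipeline, which is often the first practical question when choosing a mitigation strategy.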
If you are interested in this topic, RADIANT-CERSI would like to hear from you.
Please answer the questions below and leave your contact details to continue the conversation.