Why Regulating AI and Digital Health Bias in Healthcare Innovation Matters
This article is of interest to innovators whose technology incorporates AI that may perform with varying accuracy depending on the availability of, and access to, population data.
Health inequality is a persistent global issue (Stuart and Soulsby, 2011), and it is increasingly being shaped by how digital technologies—especially AI—are designed and deployed. For med tech innovators working in digital health and AI as a Medical Device (AIaMD), the challenge of creating fair and inclusive solutions is no longer a secondary concern—it is central to clinical safety, market access, and public trust.
One powerful example of this challenge lies in dermatology. Studies have shown that AI models used to diagnose conditions like melanoma often underperform on people with darker skin tones, largely due to underrepresentation in training data (Liu et al., 2020). Despite achieving high accuracy in controlled trials (Brinker et al., 2019; Han et al., 2020; Philips et al., 2020), these models fail to generalise across diverse populations. The consequences are not abstract: African Americans, for instance, are 1.5 times more likely to die from melanoma than their white counterparts.
This is not just a technical issue—it is a regulatory one. Without clear requirements for dataset diversity, model explainability, and bias reporting, even well-intentioned innovations can perpetuate health disparities. Relying on outdated classification systems such as the Fitzpatrick skin type scale only compounds the problem. More inclusive frameworks, such as the Monk Skin Tone (MST) scale, exist—but they are not yet widely adopted or required in regulatory review.
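To make stratified bias reporting concrete, the sketch below shows one way an evaluation of a melanoma classifier could be broken down by Monk Skin Tone (MST) band rather than reported as a single headline accuracy figure. It is a minimal Python illustration on hypothetical prediction data; the band groupings, variable names, and choice of metrics are assumptions made for the example, not a prescribed regulatory format.

```python
# Minimal sketch: reporting classifier performance stratified by skin tone band.
# Hypothetical data and band groupings -- not a prescribed regulatory format.
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Hypothetical evaluation set: true labels (1 = melanoma), model predictions,
# and an MST category (1-10) recorded for each image.
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
mst = rng.integers(1, 11, size=1000)

# Group MST 1-10 into coarser bands purely for illustration.
bands = {"MST 1-3 (lighter)": range(1, 4),
         "MST 4-7 (medium)": range(4, 8),
         "MST 8-10 (darker)": range(8, 11)}

for name, categories in bands.items():
    mask = np.isin(mst, list(categories))
    sensitivity = recall_score(y_true[mask], y_pred[mask])               # true positive rate
    specificity = recall_score(y_true[mask], y_pred[mask], pos_label=0)  # true negative rate
    print(f"{name}: n={mask.sum()}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

Reporting per-band sample sizes alongside the metrics matters as much as the metrics themselves: a small n in the darker bands is itself evidence of the underrepresentation problem described above.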
Objectives:
Identify software development strategies that address bias at every stage and transparently report variations in performance across population groups (a minimal illustration follows this list).
Develop and run an online survey exploring trust and reliance on digital health and AI-based solutions for skin health, ensuring inclusive participation.
Share the findings of this research with both innovators and regulators to inform responsible product development and evidence-based policy.
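As a concrete illustration of the first objective, the sketch below audits the composition of a training dataset before any model is built, flagging skin tone bands whose share falls below a chosen threshold. The metadata values, band groupings, and 10% threshold are assumptions made for the example, not requirements drawn from this research.

```python
# Minimal sketch: auditing training-data composition across skin tone bands
# before model development. Values and the 10% threshold are assumptions.
from collections import Counter

# Hypothetical training-set metadata: one MST category (1-10) per image.
mst_labels = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 8, 9, 2, 1, 3, 4, 2, 3]

bands = {"MST 1-3": range(1, 4), "MST 4-7": range(4, 8), "MST 8-10": range(8, 11)}
counts = Counter()
for label in mst_labels:
    for band, categories in bands.items():
        if label in categories:
            counts[band] += 1

total = len(mst_labels)
for band in bands:
    share = counts[band] / total
    flag = "  <-- under-represented" if share < 0.10 else ""
    print(f"{band}: {counts[band]}/{total} ({share:.0%}){flag}")
```

An audit of this kind sits at the data-collection stage; the stratified evaluation sketched earlier addresses the same concern at the validation and post-market surveillance stages.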
If you are interested in this topic, RADIANT-CERSI would like to hear from you. Please answer the questions below and leave your contact details to continue the conversation.