An Exploration of AI Explainability Techniques for Understanding Clinical Trust
This article is of interest to innovators whose technology incorporates or could benefit from Explainable AI (XAI).
There has been a strong focus on explainability in the AI community, particularly around "deep" neural network models whose behaviour is almost impossible to understand fully because of their complexity. This focus has been driven by a lack of trust in complex automated algorithms among end users such as clinicians, who often see only the recommended course of action without the chain of reasoning behind it. The Explainable AI (XAI) community has grown up in response, aiming to address the lack of transparency around black box models that are too complex to understand. Many XAI approaches involve post-hoc explanation facilities: new algorithms that are "bolted on" to existing black box models. There is, however, a strong argument for focusing instead on models that are inherently interpretable (Rudin 2019). An interpretable model is one that the end user can, in principle, understand because of the way it models the data, for example given suitable training or education. By keeping humans in the data analytic process through interpretable models, it is envisaged that there will be greater trust in how machine learning algorithms are used, leading to improved uptake and better-informed decisions.
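To make the distinction concrete, the sketch below (illustrative only, not this programme's code) fits an inherently interpretable logistic regression, whose coefficients are themselves the explanation, alongside a black box random forest with a post-hoc permutation-importance explanation bolted on. The synthetic data and clinical feature names are assumptions made purely for illustration.

```python
# Illustrative sketch: interpretable ("glass box") model vs black box plus post-hoc explanation.
# Data, feature names, and model choices are assumptions for demonstration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["age", "systolic_bp", "hba1c", "bmi"]  # hypothetical clinical features
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: the fitted coefficients can be read directly as the explanation.
glass_box = LogisticRegression(max_iter=1000).fit(X_train, y_train)
for name, coef in zip(feature_names, glass_box.coef_[0]):
    print(f"{name}: coefficient = {coef:+.2f}")

# Black box model: its internal behaviour is opaque, so an explanation is "bolted on"
# afterwards, here as permutation importance computed on held-out data.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
post_hoc = permutation_importance(black_box, X_test, y_test, n_repeats=20, random_state=0)
for name, imp in zip(feature_names, post_hoc.importances_mean):
    print(f"{name}: permutation importance = {imp:.3f}")
```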
This work will consider both white box (or glass box) algorithms, where the model is more transparent, and black box algorithms with explanation facilities. In a structured work programme, regulatory, clinical and data analytics specialists will explore the level of transparency needed from both regulatory and clinical perspectives, as well as what sorts of information and metrics would be needed for a medical device to be deployed safely. The aim is to identify practical, workable principles that will support the safe introduction of complex AI into the clinical pathway in the medium term.
Aim: to explore a number of key approaches to reporting AI decision-making when feeding results back to clinicians.
Objectives:
Develop interfaces that capture feature ranking, counterfactual arguments, and decision confidence in AI decision making (a simple illustration of these three kinds of feedback follows this list)
Use a number of use cases, including cardiovascular risk, diabetes, and cancer
Develop an online survey for clinicians at different levels of experience and with differing areas of expertise
Report the findings of the survey to both innovators and regulators
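As a rough illustration of the three kinds of feedback named in the first objective, the sketch below reports a decision confidence, a simple local feature ranking, and a single-feature counterfactual for one synthetic patient. The model, data, feature names, and the particular ranking and counterfactual methods are assumptions for illustration, not the project's eventual interface design.

```python
# Illustrative sketch of feature ranking, a counterfactual, and decision confidence
# for a single (synthetic) patient. All names and methods are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "systolic_bp", "hba1c", "bmi"]  # hypothetical clinical features
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
patient = X[0]

# Decision confidence: the model's predicted probability for the flagged class.
prob = model.predict_proba(patient.reshape(1, -1))[0, 1]
print(f"Decision confidence: {prob:.2f}")

# Feature ranking: order features by their local contribution to the score
# (coefficient times the patient's deviation from the cohort mean).
contributions = model.coef_[0] * (patient - X.mean(axis=0))
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: local contribution {c:+.2f}")

def one_feature_counterfactual(model, x, names, grid=np.linspace(-3, 3, 121)):
    """Return the first single-feature change found that flips the model's prediction."""
    base = model.predict(x.reshape(1, -1))[0]
    for i, name in enumerate(names):
        for delta in grid:
            cf = x.copy()
            cf[i] += delta
            if model.predict(cf.reshape(1, -1))[0] != base:
                return name, delta
    return None

# Counterfactual argument: e.g. "had systolic_bp been lower, the risk flag would clear".
cf = one_feature_counterfactual(model, patient, feature_names)
if cf:
    print(f"Counterfactual: changing {cf[0]} by {cf[1]:+.2f} would flip the decision")
```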
If you are interested in this topic, RADIANT-CERSI would like to hear from you.
Please answer the questions below and leave your contact details to continue the conversation.