Explainable AI – why demystifying the magic needs good communications


Imagine if I told you I had invented an artificial intelligence (AI)-driven digital tool that was 95% effective at identifying which patients would develop a rare but serious side effect of an innovative cancer therapy. By using this tool, clinicians could ensure the potentially costly new treatment was being used only for patients who stood to benefit and avoid unnecessary treatment complications.

Sounds good, doesn’t it? Sounds like something that could improve patients’ lives, help healthcare professionals make more informed decisions and make choosing to invest in a novel therapy a far less daunting prospect for a health system.

What if, however, I couldn’t tell you how or why my AI tool came to its conclusions? What if I told you it worked – even showed you the data to support its effectiveness – but couldn’t explain why it worked?

Would you trust it?

As pharma and biotech companies embrace the extraordinary potential of AI to accelerate research and development and make breakthroughs in diagnosing and detecting disease, this is a question the innovation community can’t afford to ignore. Explainability is an increasingly hot topic within the searingly hot topic of AI – and an area in which I believe healthcare communicators need to start building their understanding and upskilling themselves.

Explainability can be understood as a characteristic of an AI system that makes it possible for us to reconstruct why the AI came up with a specific prediction. Think of it as the AI equivalent of a maths student being asked to show his or her workings so an examiner can be confident that the understanding behind the answer is sound – and that the student hasn’t hit on the right solution by guesswork or luck.
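To make that “show your workings” analogy concrete, here is a minimal sketch in Python, assuming a scikit-learn-style workflow and entirely synthetic data. It uses a logistic regression – an inherently interpretable model – so a single prediction can be decomposed into per-feature contributions that a reviewer could inspect. Every feature name and number here is hypothetical.

```python
# A minimal sketch of the "show your workings" idea, using a small,
# inherently interpretable model. All features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical patient features: age (standardised), a biomarker level,
# and a prior-toxicity flag.
feature_names = ["age", "biomarker_level", "prior_toxicity"]
X = rng.normal(size=(200, 3))
# Hypothetical label: 1 = patient developed the side effect.
y = (0.8 * X[:, 1] + 1.2 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For one patient, break the prediction down into per-feature
# contributions to the predicted log-odds: the "workings" a clinician
# or regulator could inspect.
patient = X[0]
contributions = model.coef_[0] * patient
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
print(f"intercept: {model.intercept_[0]:+.2f}")
print(f"predicted risk: {model.predict_proba(patient.reshape(1, -1))[0, 1]:.2f}")
```

A deep or otherwise black-box model offers no such direct decomposition; a similar accounting has to be produced after the fact by dedicated explanation tools, which is precisely why explainability has become a discipline in its own right.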

Healthcare communicators – especially those of us focused on innovation comms – are skilled in translating the complex language of early-stage research into meaningful messages to build trust in novel approaches.

We now have a unique opportunity to evolve these skills so they can play a part in explaining and building trust in AI with clinicians, regulators and patients. Because without explanation there can be no understanding, and without understanding there can be no trust. And without trust, there is a risk that the astonishing value of AI for patients will be left on the table.

This thought leadership piece appeared in the May edition of PME.
