AI In Health Care Will Fail Without Proper Context
“As a jazz musician, you have individual power to create the sound. You also have a responsibility to function in the context of other people who have that power also.” – Wynton Marsalis
We’re at a tipping point in health technology. The transition to electronic health records has unlocked vast resources of data. Value-based care requires sophisticated analysis of patient outcomes. Machine learning and artificial intelligence (AI) have evolved quickly. All the tools are ready; the hard part comes next.
In my career in health care and oncology, I’ve been a chemist, a pharmacologist, an entrepreneur, an analyst, an academic, a researcher, a venture capitalist and a technologist. I’ve seen this “hard part” from many angles. Change does not come easily to health care. The decision-making systems in our sector today are inefficient and full of human flaws and bias. This is certainly true in oncology, my company’s area of focus and my brilliant wife’s specialty. I’ve come to realize that the problem isn’t technology but context. We need a shared vision. We need to translate information seamlessly among physicians, researchers, patients and computers so we can ask better questions and find better answers. The way to bring health care leaders together around AI is to invite them in through the proper contextual setting of findings.
How do we build that context? Earlier this year, I joined the National Academy of Medicine’s new Artificial Intelligence/Machine Learning in Health Care working group to tackle just that. Along with 35 other health care leaders, I’m outlining the promise, development, deployment and use of AI for policymakers, providers, payers, pharma, tech companies and patients. Every part of our health care system needs better translation:
• Physicians: Doctors are communicators, contextualizing their medical knowledge into care decisions and patient expectations. AI needs to understand and evolve within this framework, using physician expertise to ask informed questions of vast datasets. It shouldn’t stop there: AI should present complex statistical recommendations to physicians in an easy-to-use format and close the feedback loop with analysis of what worked. As health care evolves, the value of physician translation expands. Physicians will translate increasingly complex concepts to patients, as well as translating how medical expertise is applied in machine learning and how the practice of medicine transforms based on real-world data.
• Patients: Informed patients are already changing health care. Dr. Google is almost always a second opinion in the exam room. With the advent of machine learning, patient data literacy should also be a focus. Patients can and should be involved in care decisions: weighing risk, cost and discomfort based on real-world data about what works in their precise situation. It is key that patients can communicate what they really care about to their providers.
• Payers: Did the patient get better? Was the treatment we approved the most cost-effective? Where can we reduce risk while still innovating? Health plans know today’s rising health care costs are not sustainable. Adding the payer context around reimbursement goals into the science can help reduce costs and improve outcomes. Payers could certainly do a better job of translating why they make decisions on denials and cost than today’s cryptic explanation-of-benefits letters and prior authorization denials.
• Life sciences: How to improve our drug discovery and clinical trial process is the subject for a book, not a bullet point, but it is fair to say that there are vast opportunities for better translation in the pharma, device and biotech sectors. I’m personally excited about improving how life sciences companies select safe patient panels for clinical trials, which reduces adverse events, and about using real-world data to show that results from a small controlled trial translate to broad, diverse patient segments.
• Policymakers: Regulators play a crucial role in fitting AI responsibly into the health care ecosystem. They determine when AI models constitute medical devices and when they should be reimbursable. They will monitor issues around safety, liability and even unintentional bias when models are trained on biased data. Like physicians, policy leaders are crucial interpreters who need to translate needs effectively between every part of our health care system.
• Technology: Technology leaders absolutely share the translation burden here, too. Health technologies are often developed in a vacuum — far from listening to what physicians, patients, payers and pharma actually need. Early hype around AI promised miracle cures and delivered few results. To be effective, AI needs to be carefully trained to contextualize results into more practical “so-what” type of actions. It’s a conversation, and it’s not easy.
Until we can clearly and consistently answer “Should this patient take this drug, will it work better than another one, and is it worth the price?” more context is needed in health care. AI is just one tool to help us get there. It will only work as well as it is trained and understood.
As I prepare to head out to the American Society of Clinical Oncology (ASCO) annual meeting in a few weeks, I’m thinking about my own role as an interpreter and translator in health care. I’ll be listening a lot at that event. I hope that if you have a translation role to play, you’ll share your perspective in Chicago or online. Together, through translating our complex needs, we can realize the potential of AI in health care.