
Not seeing results from artificial intelligence? Rigor may be the missing piece

There is no doubt that one of the hottest topics in healthcare at the moment is artificial intelligence. The promise of AI is exciting: it has helped identify cancers in x-ray images, detect diabetic eye disease in retinal scans, and predict patients’ mortality risk, to name just a few of the medical advances it could provide.

But the paths health systems take to make AI a reality are often flawed, leading to AI dabbling without measurable outcomes. When the wrong path is taken, they end up with AI “solutions” to perceived problems, without any way to verify whether those problems are real or measurable in the first place.

Vendors often drop in AI solutions … and then walk away, leaving health systems unsure how to apply these new capabilities within the constraints of legacy workflows. These tools are often deployed without the engineering rigor needed to make the new technology testable or adaptable.

The result? The potential insights of AI are often ignored, marginally helpful, quickly obsolete, or – at worst – harmful. And without measurement, who would know?

One popular AI application that is a reliable source of excitement among health systems and vendors alike is the early detection of sepsis.

In fact, finding sepsis patients was my first assignment at Penn Medicine. The idea was that if we could identify patients at risk of sepsis early, there were treatments that could be applied, which (we thought) would save lives.

Coming from a background in missile defense, I naively thought this would be an easy mission to execute. The analogy to “find the missile, shoot the missile” seemed intuitive.

My team developed one of the best sepsis prediction models ever created. [1] Validated and published, it led to more lab testing and faster ICU transfers – yet it produced no change in patient outcomes.

It turned out that Penn Medicine was already very good at finding sepsis patients, and that this state-of-the-art algorithm was not, in fact, needed at all. Had we followed the engineering process now in place at Penn Medicine, we would have found no evidence that the original problem statement, “finding sepsis patients,” was a problem at all.

That engineering design effort would have saved several months of work spent deploying a system that ultimately amounted to a distraction.

Over the past few years, vendors and health systems alike have made hundreds of claims of successful applications of AI. So why have only a handful of the resulting studies been able to demonstrate real value? [2]

The problem is that many health systems attempt to solve healthcare problems simply by configuring vendor products. What this approach misses is the engineering rigor needed to design a complete solution, one that incorporates technology, human workflow, measurable value, and long-term operational capability.

The approach is often siloed from the start: independent teams are assigned separate tasks, and completing those tasks becomes the measure of project success.

Success, then, hinges on tasks, not on value. Connecting those tasks (or projects) to the outcomes that really matter – saving lives, saving dollars – is difficult, and requires a holistic engineering approach.

Whether these projects are working, how successful they are (or whether they were needed in the first place) is never measured. The incomplete view goes like this: if the AI technology is deployed, success is claimed and the project is complete. The rigor required to define and measure value is absent.

Getting value from AI in healthcare is a problem that demands a precise, thoughtful, long-term solution. Even the most useful AI technology can suddenly stop performing when hospital workflows change.

For example, Penn Medicine’s readmission risk model once showed a sudden, slight drop in its risk scores. The culprit? An unintended change in the EHR configuration. Because the solution had been designed as a complete system, the data feed was being monitored, and the teams were able to quickly communicate and correct the EHR change.

We estimate that situations like this arise roughly twice a year for each predictive model in production. Continuous monitoring of the system, the workflow, and the data is therefore necessary, even once a model is in operation.
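
To make the idea of continuous monitoring concrete, here is a minimal sketch of the kind of check that can flag a sudden shift in a model’s score distribution. This is an illustration only, not Penn Medicine’s actual monitoring system; the function name, thresholds, and numbers are all assumptions.

```python
# A minimal score-drift check: compare today's mean risk score against
# the distribution of historical daily means. An unintended upstream
# change (e.g., an EHR feed silently zeroing a feature) tends to show
# up as exactly this kind of shift.
from statistics import mean, stdev

def drift_alert(baseline_daily_means, todays_scores, z_threshold=3.0):
    """Return True if today's mean score deviates sharply from baseline.

    baseline_daily_means: mean score for each of the last N days
    todays_scores:        individual model scores produced today
    """
    mu = mean(baseline_daily_means)
    sigma = stdev(baseline_daily_means)
    if sigma == 0:
        return False  # degenerate baseline; nothing to compare against
    z = abs(mean(todays_scores) - mu) / sigma
    return z > z_threshold

# Illustrative numbers: today's scores sit far below the baseline range.
baseline = [0.31, 0.29, 0.30, 0.32, 0.30, 0.31, 0.29, 0.30]
today = [0.12, 0.15, 0.10, 0.14, 0.11]
if drift_alert(baseline, today):
    print("Alert: score distribution shifted; check upstream data feeds.")
```

A real deployment would watch many signals (feature distributions, missingness rates, volumes), but even a check this simple turns a silent failure into a page to the right team.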

For AI in healthcare to reach its potential, health systems must extend their energies beyond clinical practice and take full ownership of their AI solutions. Rigorous engineering, with clearly defined outcomes tied directly to measurable value, will be the foundation on which every successful AI program is built.

Value should be defined in terms of lives saved, dollars saved, or patient and clinician satisfaction. The health systems that succeed with AI will be the ones that carefully identify their problems, measure the evidence for those problems, and design experiments that link the proposed interventions to the desired outcomes; a back-of-the-envelope example follows.
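
As a concrete, entirely hypothetical illustration of what “measurable value” might look like, consider a simple estimate for a readmission-reduction program. Every number below is an assumption for the sake of the arithmetic, not a Penn Medicine figure.

```python
# Hypothetical dollars-saved estimate for a readmission-reduction program.
baseline_readmission_rate = 0.16   # rate before the intervention (assumed)
observed_readmission_rate = 0.14   # rate measured in a controlled experiment
annual_discharges = 20_000         # assumed annual discharge volume
cost_per_readmission = 15_000      # assumed average cost in dollars

readmissions_avoided = (baseline_readmission_rate
                        - observed_readmission_rate) * annual_discharges
dollars_saved = readmissions_avoided * cost_per_readmission

print(f"Readmissions avoided per year: {readmissions_avoided:.0f}")   # 400
print(f"Estimated annual savings: ${dollars_saved:,.0f}")             # $6,000,000
```

The point is not the specific figures but the discipline: the baseline, the experiment, and the value metric are all defined before the technology is declared a success.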

Successful health systems will understand that rigorous design processes are essential to scaling their solutions into operations, and will be willing to treat both the technology and the human workflow as parts of the engineering process.

Like Blockbuster, which famously failed to rethink how movies are delivered, health systems that refuse to see themselves as engineering houses risk falling seriously behind in their ability to properly leverage AI technology.

Making sure your websites and email servers are running is one thing; ensuring that the health system improves care for heart failure is quite another.

The first is an IT service; the second is a complete product solution that requires an extensive team of clinicians, data scientists, software developers, and engineers, along with clearly defined metrics for success: lives and/or dollars saved.

[1] Giannini HM, Chivers C, Draugelis M, Hanish A, Fuchs B, Donnelly P, Mikkelsen ME (2017). Development and implementation of a machine learning algorithm for early identification of sepsis in a multi-hospital academic healthcare system. American Journal of Respiratory and Critical Care Medicine, 195.

[2] Halamka J, Cerrato P (2020). The Digital Reconstruction of Healthcare. NEJM Catalyst Innovations in Care Delivery. DOI: https://doi.org/10.1056/CAT.20.0082

Michael Draugelis is the chief data scientist at Penn Medicine, where he leads the Predictive Healthcare Team.
