
How do we ensure healthcare AI is useful?

WBOY | 2023-04-11

In the grand scheme of the business of healthcare, predictive models play a role no different than a blood test, X-ray, or MRI: they influence decisions about whether an intervention is appropriate.


"Broadly speaking, models perform mathematical operations and produce probability estimates that help doctors and patients decide whether to take action," said Nigam Shah, Chief Data Scientist at Stanford Health Care and Stanford University HAI faculty member. But these probability estimates are only useful to health care providers if they trigger more beneficial decisions.

"As a community, I think we are obsessed with the performance of the model, rather than asking, does this model work?" Shah said. "We need to think outside the box."

Shah's team is one of the few health care research groups that assesses whether hospitals have the capacity to deliver interventions based on a model's predictions, and whether those interventions will benefit patients and health care organizations.

"There is growing concern that AI researchers are building models left and right without deploying anything," Shah said. One reason for this is that modelers fail to conduct usefulness analyses showing how the interventions a model triggers can be cost-effectively integrated into hospital operations while doing more good than harm. "If model developers are willing to take the time to do this additional analysis, hospitals will pay attention," he said.

Tools for usefulness analysis already exist in operations research, health care policy, and econometrics, Shah said, but model developers in health care have been slow to use them. His own team has tried to change this mentality by publishing a number of papers urging more people to evaluate the usefulness of their models. These include a JAMA paper addressing the need for modelers to consider usefulness, and a research paper that proposes a framework for analyzing the usefulness of predictive models in health care and shows how it works using real-world examples.

"Like anything new that hospitals might add to their operations, deploying a new model must be worthwhile," Shah said. "There are mature frameworks in place to determine the value of a model. Now it's time for modelers to put them to use."

[Figure: the usefulness of a model depends on the interplay between the model, the interventions it triggers, and the benefits and harms of those interventions]

Understanding the interplay between models, interventions, and the benefits and harms of interventions

As shown in the figure above, the usefulness of a model depends on the interplay between the model itself, the interventions it triggers, and the pros and cons of those interventions, Shah said.

First, the model, which often gets the most attention, should be good at predicting whatever it is supposed to predict, whether that is a patient's risk of readmission to the hospital or their risk of developing diabetes. Additionally, Shah said, it must be equitable, meaning the predictions it produces apply equally to everyone regardless of race, ethnicity, nationality, or gender; it must be generalizable from one hospital site to another, or at least make reliable predictions about the local hospital population; and it should be interpretable.
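One concrete way to probe the equity requirement Shah describes is to compare a model's accuracy separately for each demographic subgroup rather than only in aggregate. The sketch below is a minimal illustration with made-up data; the group labels, data, and the `subgroup_accuracy` helper are our own, not from the article:

```python
from collections import defaultdict

def subgroup_accuracy(predictions, labels, groups):
    """Compute prediction accuracy separately for each subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Illustrative data: 1 = predicted/actual readmission, 0 = no readmission.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
labels = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(subgroup_accuracy(preds, labels, groups))
# Both groups: 3 of 4 predictions correct.
```

A large gap between subgroup accuracies would flag the kind of inequity Shah warns about, though, as he notes later, equal accuracy alone does not guarantee that the triggered intervention benefits everyone equally.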

Second, health care organizations must develop policies about when and how to intervene based on tests or models, as well as decide who is responsible for the intervention. They must also have the capacity (sufficient staff, materials, or other resources) to perform the intervention.

Shah said that developing policies about whether or how to intervene in specific ways in response to a model affects health equity. "Researchers spend too much time focusing on whether a model is equally accurate for everyone, and not enough time focusing on whether the intervention will benefit everyone equally, even though most of the inequities we try to address arise from the latter," he said.

For example, a model predicting which patients will not show up for their appointments may not be unfair in itself if its predictions are equally accurate for all racial and ethnic groups. But the choice of how to intervene, whether by double-booking appointment slots or by providing transportation support to help people get to their appointments, may have different impacts on different groups of people.

Third, the benefits of the intervention must outweigh the harms, Shah said. Any intervention can have both positive and negative consequences, so the usefulness of a model's prediction will depend on the pros and cons of the intervention it triggers.

To understand this interplay, consider a commonly used predictive model: the atherosclerotic cardiovascular disease (ASCVD) risk equation, which relies on nine major data points (including age, sex, race, total cholesterol, LDL/HDL cholesterol, blood pressure, smoking history, diabetes status, and use of antihypertensive medications) to calculate a patient's 10-year risk of heart attack or stroke. A fleshed-out usefulness analysis of the ASCVD risk equation would consider the three parts of the figure above and find it useful, Shah said.

First, the model is widely considered to be highly predictive of heart disease, and it is also fair, generalizable, and interpretable. Second, most medical institutions intervene by following standard policies that tie statin prescribing to risk level, and they have sufficient capacity to intervene because statins are widely available. Finally, a harm/benefit analysis of statin use suggests that most people benefit from statins, although some patients cannot tolerate their side effects.

An example of model usefulness analysis: Advance Care Planning

The ASCVD example above, while illustrative, is probably one of the simplest predictive models. But predictive models have the potential to trigger interventions that disrupt healthcare workflows in more complex ways, and the benefits and harms of some interventions may be less clear.

To address this issue, Shah and colleagues developed a framework to test whether predictive models are useful in practice. They demonstrated the framework using a model that triggers an intervention called advance care planning (ACP).

ACP is typically provided to patients who are nearing the end of their life and involves an open and honest discussion of possible future scenarios and the patient’s wishes should they become incapacitated. Not only do these conversations give patients a sense of control over their lives, they also reduce health care costs, improve physician morale, and sometimes even improve patient survival rates.

Shah's team at Stanford developed a model that predicts which hospital patients are likely to die in the next 12 months. The goal: to identify patients who may benefit from ACP. After ensuring that the model predicted mortality well and was fair, interpretable, and reliable, the team conducted two additional analyses to determine whether the interventions triggered by the model were useful.

The first was a cost-benefit analysis, which found that a successful intervention (providing ACP to patients correctly identified by the model as likely to benefit) would save approximately $8,400, while providing the intervention to those who did not need ACP (i.e., model errors) would cost approximately $3,300. "In this case, very roughly speaking, even if we were only a third right, we would break even," Shah said.
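Shah's "a third right" remark can be checked with simple arithmetic: if a correct intervention saves about $8,400 and an unnecessary one costs about $3,300, the model breaks even at the precision p where expected savings equal expected costs. The dollar figures come from the article; the calculation itself is our own illustration:

```python
benefit_per_true_positive = 8400  # savings when ACP reaches a patient who benefits
cost_per_false_positive = 3300    # cost of offering ACP to a patient who didn't need it

# Break-even precision p satisfies: p * benefit - (1 - p) * cost = 0
# Solving for p gives p = cost / (benefit + cost).
break_even_precision = cost_per_false_positive / (
    benefit_per_true_positive + cost_per_false_positive
)

print(f"Break-even precision: {break_even_precision:.1%}")
# Roughly 28%: being right only about a third of the time already clears break-even.
```

Any model whose precision on ACP referrals exceeds that threshold saves money in expectation, which is exactly the "very roughly speaking" claim in the quote.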

But the analysis did not stop there. "To capture those promised $8,400 in savings, we actually had to implement a workflow that involved, say, 21 steps, three people, and seven handoffs within 48 hours," Shah said. "So, in real life, can we do that?"

To answer this question, the team simulated the intervention over 500 hospital days to assess how care-delivery constraints, such as limited staff or the lack of time before a patient is discharged, would affect the benefit of the intervention. They also quantified the relative benefits of increasing inpatient staffing versus providing ACP on an outpatient basis. The result: having an outpatient option ensures that more of the expected benefit is realized. "We only had to follow up with half of the discharged patients to get 75 percent efficacy, which is pretty good," Shah said.
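The article does not describe the team's simulation in detail, but the idea of stress-testing an intervention against staffing limits can be sketched as a toy Monte Carlo model. All of the parameters here (daily arrivals, inpatient capacity, the 50% outpatient follow-up rate, and the `simulate_days` helper) are hypothetical illustrations, not the Stanford study's actual figures:

```python
import random

def simulate_days(days, max_arrivals_per_day, inpatient_capacity,
                  outpatient_followup_rate, seed=0):
    """Toy discrete simulation of ACP delivery under staffing limits.

    Each day some model-flagged patients arrive; staff can hold at most
    `inpatient_capacity` ACP conversations before discharge. A fraction of
    the patients missed as inpatients are reached later as outpatients.
    Returns the fraction of flagged patients who received ACP.
    """
    rng = random.Random(seed)
    reached, flagged = 0, 0
    for _ in range(days):
        arrivals = rng.randint(0, max_arrivals_per_day)
        flagged += arrivals
        inpatient = min(arrivals, inpatient_capacity)   # capped by staffing
        missed = arrivals - inpatient                   # discharged before ACP
        outpatient = sum(rng.random() < outpatient_followup_rate
                         for _ in range(missed))
        reached += inpatient + outpatient
    return reached / flagged if flagged else 0.0

coverage = simulate_days(days=500, max_arrivals_per_day=6,
                         inpatient_capacity=3, outpatient_followup_rate=0.5)
print(f"Fraction of flagged patients reached: {coverage:.0%}")
```

Running scenarios like this, with and without the outpatient follow-up channel or with higher inpatient capacity, is one way to compare delivery strategies before committing staff to a new workflow, which is the kind of question the Stanford simulation addressed.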

This work shows that even if you have a very good model and a very good intervention, a model is only useful if you also have the ability to deliver the intervention, Shah said. While hindsight may make this result seem intuitive, Shah said that was not the case at the time. "Had we not completed this study, Stanford Hospital might have just expanded its inpatient capacity to offer ACP, even though it was not very cost-effective."

The framework Shah's team used to analyze the interplay between models, interventions, and the pros and cons of interventions can help identify predictive models that are useful in practice. "At a minimum, modelers should conduct some kind of analysis to determine whether their models suggest useful interventions," Shah said. "This will be a start."


Statement: This article is reproduced from 51cto.com.