Sepsis prediction tool used by hospitals misses many cases, study says. Firm that developed the tool disputes those findings.

By Erin Blakemore

When it comes to sepsis, the body’s often-deadly response to out-of-control infections, time is of the essence. According to the Centers for Disease Control and Prevention, 1 in 3 patients who die in a hospital had sepsis.

Treating the life-threatening condition can be tricky, however, and it isn’t always clear whether a patient has sepsis at all.

Hospitals increasingly rely on predictive tools to help them recognize sepsis cases. But according to a new study in JAMA Internal Medicine, a sepsis prediction tool used by hundreds of U.S. hospitals may miss many cases.

Epic Systems, which developed the tool, disputed those findings.

“The authors used a hypothetical approach,” Epic said in a statement. “They did not take into account the analysis and required tuning that needs to occur prior to real world deployment to get optimal results.”

The study used retrospective data to examine the widely used tool, which according to Epic has an 80 percent accuracy rate. The company says more than 250 million patients in the United States have a current medical record in its system, and its products are used in hospitals around the country.

Epic developed its mathematical model based on data from over 400,000 patient encounters, but researchers at the University of Michigan Medical School in Ann Arbor wondered how the tool performed in real life.

The researchers used hospital data from over 27,000 patients, all of whom had been evaluated for sepsis using the Epic Sepsis Model. They found that although the model generated predictive scores every 15 minutes, the scores didn’t necessarily match up with patients’ conditions. The model generated alerts for 1 in 5 patients, but the study suggests it missed two-thirds of the sepsis cases and flagged only 7 percent of the patients whose sepsis had been missed by a clinician.

The researchers say the disconnect lies in the way the model was created. They say developers relied on billing data that might not reflect accurate sepsis diagnoses and used the time a doctor intervened — such as by ordering tests or prescribing antibiotics — to define the onset of sepsis. But doctors often fail to recognize sepsis, the researchers say.

“In essence, they developed the model to predict sepsis that was recognized by clinicians at the time it was recognized by clinicians,” said Karandeep Singh, assistant professor of learning health sciences and internal medicine at Michigan Medicine, in a news release. “However, we know that clinicians miss sepsis.”

He says the study shows the need for more oversight and open-source models that can be validated more easily.
