AI could soon help with ER admissions, study finds
Doctors working in emergency departments might soon be able to use artificial intelligence to help determine which patients should be admitted to the hospital, according to a new study published Tuesday in the Journal of the American Medical Informatics Association.
As part of the study, researchers used OpenAI’s GPT-4 to analyze medical records from more than 864,000 emergency room visits across seven Mount Sinai hospitals.
Researchers then tested how accurately the model could predict which of those visits would result in admission to the hospital.
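The article does not describe the study’s actual prompts or pipeline, but a yes/no prediction of this kind could in principle be posed to a GPT-4-class model along the following lines. This is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording, and helper function are hypothetical, not the researchers’ method.

```python
# Hypothetical sketch of asking a GPT-4-class model whether an ER visit
# will end in admission; the study's real prompts and setup are not
# described in the article.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def predict_admission(record_text: str) -> bool:
    """Ask the model for a yes/no admission prediction on one ER record."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        temperature=0,  # deterministic output for a classification task
        messages=[
            {"role": "system",
             "content": "You assist with emergency department research. "
                        "Answer only 'yes' or 'no'."},
            {"role": "user",
             "content": "Based on this emergency room record, will the "
                        f"patient be admitted to the hospital?\n\n{record_text}"},
        ],
    )
    answer = response.choices[0].message.content.strip().lower()
    return answer.startswith("yes")
```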
About 18 percent of those visits—or 159,857—resulted in a person being admitted to the hospital.
The large language model had an accuracy rate of about 83 percent, which, while not perfect, means the technology could soon support doctors in emergency rooms, according to the researchers.
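One way to put the 83 percent figure in context is against the admission rate itself: since only about 18 percent of visits ended in admission, a model that always predicted “no admission” would already be right roughly 82 percent of the time. A quick back-of-the-envelope check using the article’s own figures (this arithmetic is illustrative, not an additional result from the study):

```python
# Illustrative arithmetic only, based on figures reported in the article;
# not additional results from the study itself.
total_visits = 864_000      # approximate number of ER visits analyzed
admitted = 159_857          # visits that ended in hospital admission

admission_rate = admitted / total_visits   # ~0.185, i.e. about 18 percent
baseline_accuracy = 1 - admission_rate     # a "never admit" guess scores ~81.5%

print(f"admission rate:          {admission_rate:.1%}")
print(f"'never admit' baseline:  {baseline_accuracy:.1%}")
print(f"reported model accuracy: {0.83:.1%}")
```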
“Getting 100 percent accuracy is going to be difficult because of the random nature of some of these things,” said Girish Nadkarni, a professor of medicine at the Icahn School of Medicine at Mount Sinai and senior author of the study.
“When you are talking about transferring someone to a hospital bed, you might not need perfect accuracy because that decision can be reversible,” Nadkarni told The Hill.
The program could be useful for logistical and administrative tasks in emergency rooms, ultimately improving workflow, according to Nadkarni.
“Predicting which patients are unlikely to be admitted will not just help individual physicians but also health systems, because you can decide how to optimize a particular system for efficiency,” Nadkarni added.
Doctors and hospital staff could theoretically use the technology to shorten patient wait times and more quickly figure out how many beds a hospital needs, which ER patients should be transferred to inpatient floors and, ultimately, who needs to be discharged.
Ajeet Singh, a hospitalist and informaticist at Rush University Medical College who did not participate in the study, stressed that large language models like GPT-4 do not understand language the way people do and therefore have severe limits on how they can help in clinical settings.
Physicians consider numerous factors when deciding whether a patient in the emergency room needs to be admitted to the hospital, and large language models have no real understanding of these factors or their relationships to each other, according to Singh.
Even if the technology is implemented in emergency rooms, doctors will still need to perform their own independent evaluations to determine a patient’s treatment.
These programs do not understand the words they are trained on, such as those in the medical records used in this study; they merely mimic reasoning by predicting relationships among words.
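As a rough, self-contained illustration of that idea (not code from the study), even a toy model that only counts which words tend to follow which can produce plausible-looking completions with no grasp of meaning:

```python
# Toy illustration: a model that only tracks which words follow which
# can "complete" text plausibly without understanding what the words mean.
from collections import Counter, defaultdict

corpus = ("patient admitted to hospital . patient discharged home . "
          "patient admitted to hospital .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("patient"))   # "admitted" (seen twice vs. once)
print(predict_next("admitted"))  # "to"
```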
Singh said in a text message to The Hill it was “unfortunate” that that limitation was not mentioned in the study. “We cannot mistake performance gains in language models for progress in actual language understanding,” he said.
“We must be diligent in recognizing the limitations of these tools and not fall for the hype in attributing perceived accuracy to actual understanding or a language model’s ability to discern truth,” Singh said. “If we fail to recognize the vulnerabilities of the tools we use, we will apply them in ways that expose our patients to greater harm.”