Picture a world where healthcare is not confined to a clinic.
The watch on your wrist ticks steadily throughout the day, collecting and transmitting information about your heart rate, oxygen saturation and blood sugar levels. Sensors scan your face and body, making inferences about your state of health.
By the time you see a doctor, algorithms have already synthesized this data and organized it in ways that fit a diagnosis, detecting health problems before symptoms arise.
We aren’t there yet, but, according to Harlan Krumholz, a professor of medicine at the Yale School of Medicine, this could be the future of healthcare powered by artificial intelligence.
“This is an entirely historic juncture in the history of medicine,” Krumholz said. “What we’re going to be able to do in the next decades, compared to what we have been able to do, is going to be fundamentally different and much better.”
In recent months, Yale researchers have published a variety of papers on machine learning in medicine, from wearable devices that can detect heart defects to algorithms that can triage COVID-19 patients. Though much of this technology is still in development, the rapid surge of AI innovation has prompted experts to consider how it will impact healthcare in the near future.
Questions remain about the reliability of AI conclusions, the ethics of using AI to treat patients and how this technology might transform the healthcare landscape.
Synergy: human and artificial intelligence at Yale
Two recent Yale studies highlight what the future of AI-assisted healthcare could look like.
In August, researchers at the School of Medicine developed an algorithm to diagnose aortic stenosis, a narrowing of the valve that opens into the aorta, the body’s largest blood vessel. Currently, diagnosis usually entails a preliminary screening by the patient’s primary care provider and then a visit to the radiologist, where the patient must undergo a diagnostic Doppler exam.
The new Yale algorithm, however, can diagnose a patient from just an echocardiogram performed by a primary care doctor.
“We are at the cusp of doing transformative work in diagnosing a lot of conditions that otherwise we were missing in our clinical care,” said Dr. Rohan Khera, senior author of the study and clinical director of the Yale Center for Outcomes Research and Evaluation (CORE). “All this work is powered by patients and their data, and how we intend to use it is to give back to the most underserved communities. That’s our big focus area.”
The algorithm was also designed to be compatible with cheap and accessible handheld ultrasound machines, said lead author Evangelos Oikonomou, a clinical fellow at the School of Medicine. This would bring first-stage aortic stenosis testing to the community, rather than limiting it to patients who are referred to a skilled, and potentially expensive, radiologist. It could also allow the disease to be diagnosed before symptoms arise.
In a second study, researchers used AI to support physicians in hospitals by predicting COVID-19 outcomes for emergency room patients, delivering its predictions within 12 hours.
According to first author Georgia Charkoftaki, an associate research scientist at the Yale School of Public Health, hospitals often run out of beds during COVID-19 outbreaks. AI-powered predictions could help determine which patients need inpatient care and which patients can safely recover at home.
The algorithm is also designed to be adaptable to other diseases.
“When [Respiratory Syncytial Virus] babies come to the ICU, they are given the standard of care, but not all of them respond,” Charkoftaki said. “Some are intubated, others are out in a week. The symptoms [of RSV] are similar to COVID and so we are working on a study for clinical metabolomics there as well.”