Article In Brief
Some stroke neurologists say the benefits of artificial intelligence (AI) for detection and prognosis are relatively trivial at the large, well-staffed medical centers where it is most likely to be found, but others believe the greatest benefits will come with AI systems that can do much more than simply read a scan.
Treatment decisions for stroke used to follow a linear decision pathway. A patient who arrived by ambulance in the emergency department unable to move their left arm would have a CT scan taken. If the radiologist found a large vessel occlusion (LVO), the interventionist on call would be summoned to perform a thrombectomy.
That was then. Now, in some major metropolitan areas, the dispatcher will summon a mobile stroke unit instead of an ordinary ambulance. The responding team will deploy a CT scanner in the back of its vehicle. The scan will be automatically uploaded to an artificial intelligence (AI) program that detects signs of an LVO. Before a radiologist has even looked at the scan, the AI system will send an alert with the CT image to the cell phones of the intervention team. The team will assemble so that by the time the mobile unit arrives at the hospital, the procedure room is prepped, and the interventionist is ready to restore blood flow to the site of the clot.
Now consider the future: Next-generation AI analyzes not only the CT image but also the patient's complete medical history. It calculates whether the patient can survive a procedure, whether brain tissue downstream of the clot is salvageable, and the patient's 90-day prognosis. Its estimates come with an explanatory text to help the physician and family understand the AI's reasoning.
Will AI Replace Physicians?
In just five years, the arrival of commercial AI programs has transformed the process of detecting and treating a stroke at most major medical centers. However, many smaller hospitals, where AI triaging could have the greatest benefit, have yet to implement the new technology, according to neurologists, radiologists, and neurosurgeons who spoke with Neurology Today. But as ever more powerful AI systems are investigated, the inevitable question arises: Are physicians who specialize in the diagnosis and treatment of stroke still going to have a job in 10 years?
“As powerful as AI is, I don't think it can ever replace the human expert,” said Achala Vagal, MD, MS, professor of radiology and vice chair of the department of radiology at the University of Cincinnati Medical Center. “We cannot take AI as gospel. I call it augmented intelligence: it's one more tool to help us help our patients.”
But it can be a powerful tool. Johanna T. Fifi, MD, FAAN, a neurologist and associate director of the cerebrovascular center at Mount Sinai in New York City, told Neurology Today that AI has been “revolutionary” for her health system.
“We have hospitals in our system that might not have a stroke team in-house on nights or weekends,” Dr. Fifi said. “In previous years, it could take hours for an outside hospital to call us and say they had an LVO. The patient could have been sitting in the emergency department for three or four hours by then, waiting for the radiologist to read the scan. Now we've gotten so used to having everyone on call alerted by their phone. Everyone is notified at once, so we can work in parallel. All the steps are shorter, so it decreases the time to treatment.”
As welcome and widespread as AI has become, some say its benefits are relatively trivial at the large, well-staffed medical centers where it is most likely to be found. Others say the greatest benefits will come with AI systems now being developed to do much more than simply read a scan.
Early AI Development for Stroke
One of the first automated image-processing platforms was developed approximately 15 years ago by Stanford neurologist Gregory W. Albers, MD, and radiologist Roland Bammer, PhD, now at Monash University. They developed an algorithm, now called Rapid, to identify salvageable tissue and core (dead) tissue. With the MRI software installed in eight hospitals in the United States and one European center, they reported results of a prospective cohort study in Lancet Neurology in 2012, showing that patients with salvageable tissue who underwent prompt reperfusion had favorable clinical outcomes, while those without salvageable tissue did not.
In 2020, the US Food and Drug Administration (FDA) approved the Rapid ASPECTS software to automatically deliver standardized results for predictive assessment of thrombectomy eligibility. Within minutes, the program delivers results to radiologists' picture archiving and communication systems (PACS) and, via email and cell phone alerts, to all members of the interventional team. Over 1,700 sites across the United States and more than 2,300 globally now use Rapid ASPECTS, according to a company spokesperson.
More than 1,300 hospitals, including most of the 50 largest health care systems in the United States, use another widely adopted platform, Viz.ai, according to a company spokesperson. The FDA has cleared nine of the company's algorithms, including those for LVO and cerebral aneurysm.
Two other widely used AI systems for triaging possible LVO, hemorrhage, aneurysm, and other conditions are Aidoc and Avicenna.AI's CINA. Neither is cleared by the FDA for detection, but both are used to alert members of the neurovascular team via smartphone or tablet when the software has identified a patient with a potential stroke.
James C. Grotta, MD, FAAN, director of the mobile stroke unit and director of stroke research at the Clinical Institute for Research and Innovation at Memorial Hermann-Texas Medical Center, said his hospital began using the Rapid system at least eight years ago.
“The original use was in detecting mismatch, where you measure the amount of blood flow reduction against the core of the infarct and look for a mismatch between the two,” Dr. Grotta said. “Once you identify a patient with an LVO, you want to know if they have salvageable tissue.”
Before they began using the AI system, he said, the initial CT scans were seen only by the physician on call, who then had to contact other members of the intervention team.
“The beauty of our system now is I can be in the grocery store and get an alert,” Dr. Grotta said. “I open my phone and the AI has done the work for me. I know I need to leave the store and go to the hospital.”
Studies Document Benefits
Many studies published in the past 10 years have examined the effects of AI for LVOs and other neurologic emergencies. In 2021, for instance, Dr. Fifi was the senior author of a paper in the journal Cerebrovascular Diseases describing Mount Sinai's real-world experience with AI-based triage in patients transferred to a comprehensive stroke center for LVO. It found that the median initial door-to-neuroendovascular team notification time interval was significantly faster, with less variation, following implementation of VizLVO. The median initial door-to-skin puncture time interval was 25 minutes shorter in the post-Viz cohort, although this finding was not statistically significant (p=0.15).
Dr. Fifi also coauthored a paper in February in Cerebrovascular Diseases Extra, which documented how the Viz software had significantly decreased all workflow metrics for patients with vessel occlusion who had been transferred.
One of the first studies to compare two commercial AI systems head-to-head was published in 2022. Senior author Jennifer Soun, MD, assistant professor of radiological sciences at the University of California, Irvine, found that Rapid LVO demonstrated an accuracy of 0.86, sensitivity of 0.90, and specificity of 0.86. CINA, on the other hand, demonstrated an accuracy of 0.96, sensitivity of 0.76, and specificity of 0.98.
“It can be hard to compare tools, because usually institutions purchase just one of these tools,” Dr. Soun said. “We had two, so we were able to see the strengths and limitations of these tools in a real-world setting.”
Beyond the commercially available systems, some academic researchers have developed their own AI systems. In 2022, Nature Communications published a paper by Korean researchers describing what they called an anomaly detection algorithm that had been trained on CT scans of healthy individuals. It found that the median radiology turnaround time for diagnosis of a brain lesion was nearly five minutes faster with the AI system than without it.
Cautions and Concerns
While AI is reasonably accurate in diagnosing a stroke, it would be a mistake to rely on its judgment exclusively, said Elad Levy, MD, professor of neurosurgery and radiology, and chair of the department of neurosurgery at the Jacobs School of Medicine and Biomedical Sciences at the University at Buffalo.
“It can make the diagnosis of stroke, but it doesn't get you to the point of saying whether you should or should not treat this patient,” Dr. Levy said. “We still need to be a doctor and put the patient, the physiology and the anatomy together to make a good decision. I'm hoping the AI systems can start communicating with the electronic medical record, the lab reports, the physical exam, so we're not just treating a CT scan, but we're treating the patient.”
In a paper published last year in Radiology: Artificial Intelligence, Dr. Vagal said AI had caused a paradigm shift for radiologists as they respond to a “code stroke.”
“Currently, decisions regarding triage and treatment using imaging data are being made in real time with or without a radiologist's input,” she wrote. “More often, the participants in the online communication are the stroke neurologist, emergency physician, neurosurgeon, neurointerventionalist, and stroke nurse coordinator—all discussing the imaging findings and the implications for patient care.”
Dr. Vagal told Neurology Today, however, that radiologists are the experts who protocol the scans. “For me, the paradigm shift is that we have our desktop PACS workstations for viewing images, but the decision-making and communication about triaging the patient is happening on a different platform, on the phone,” she said. “We radiologists need to stay actively involved with the rest of the stroke team in interpreting and confirming what the AI is saying.”
The Future of AI
Greg Zaharchuk, MD, PhD, professor of radiology, neuroimaging, and neurointervention at Stanford University, said he views the currently available AI systems largely as incrementally improved post-processing software; the critical value added, he said, is the inclusion of a cell phone-based communication system for team coordination.
“Artificial intelligence is a vague term; you need to qualify what you're talking about,” he said. “To me, the real advances in the past five years have been with the use of deep learning and convolutional neural networks. Many of these LVO and stroke detection tools use these in a very limited manner if at all.”
“Using convolutional neural networks to extract relevant features enables you to combine imaging and clinical information in an optimal way, rather than guessing which features are important,” Dr. Zaharchuk added. “This is where the field is going. It will be a sea change in how we integrate imaging and clinical data.”
Dr. Zaharchuk's research has focused on AI-based image-to-image translation, in which AI can be used to take in a poor-quality image and predict a high-quality image in response. This will allow images to be acquired faster, he said, with benefits to both patients and imaging facilities, and it is particularly relevant to stroke imaging. He started a company in 2017 called Subtle Medical whose technology is now used on over a million patients per year, he said.
“The exciting thing about predicting images,” Dr. Zaharchuk said, “is you can also use it to predict future images or future outcomes. From all the stroke trial data, we can train a model at time zero, when someone comes in the door, and use AI to predict outcomes based on different treatment strategies in a very data-driven way. We're doing studies now predicting both tissue outcomes in the early post-treatment time period as well as 90-day clinical outcomes.”
Another step into the future would be placing the AI into a roving robot.
“Right now, when my fellows and I are doing rounds, we might have to stop and look up some information about a patient's disease or review their chart,” Dr. Grotta said. “A robot making rounds with us might be able to do that automatically, to spit out a differential diagnosis and treatment plan. I'm not saying it's going to replace the physician, but it could be a useful part of the care team.”
Dr. Achala Vagal is a consultant for Viz AI and has received consulting fees. Dr. Vagal leads the imaging core lab for the ENDOLOW clinical trial, for which the University of Cincinnati radiology department has received research grant funding. Dr. Soun has received research support from Canon Medical.