AI tool reads brain MRIs in seconds, aiming to ease pressure on health systems
Tags: AI, MRI, University of Michigan
Illustrative image generated using artificial intelligence. This visual is for conceptual purposes only and does not represent actual patient data, medical scans, or clinical outcomes.
A new artificial intelligence system developed by researchers at the University of Michigan is showing how advanced computing could reshape the way brain scans are read, with the ability to analyze MRI images in just seconds and assist physicians facing growing imaging workloads.
The tool, called Prima, is an MRI-specific video language model designed to process images, video, and text simultaneously in real time.
By reviewing brain scans alongside clinical information, Prima mirrors how radiologists assess cases in practice, offering a broader and more integrated approach than earlier AI systems that were limited to detecting single diseases.
Researchers behind the project said the technology was developed in response to rising global demand for MRI scans, which continues to place pressure on health systems and specialists.
Todd Hollon, MD, a neurosurgeon at U-M Health and senior author of the study, said the model has the potential to reduce clinical burden by delivering fast and accurate diagnostic insights that can support timely treatment decisions.
Prima was trained using more than 200,000 brain MRI exams collected at the university over several decades, paired with patients’ medical histories and clinical indications.
The system was then tested on more than 30,000 brain studies conducted over a one-year period.
Unlike earlier AI tools, the model was designed to consider all available imaging and clinical data at once, allowing it to generate a more comprehensive understanding of each case.
According to co-first author Samir Harake, an MD candidate and data scientist, Prima integrates medical history and imaging data in a way that closely resembles a radiologist’s workflow.
This design enables stronger performance across a wide range of diagnostic and predictive tasks, rather than focusing on a narrow set of conditions.
Testing showed that the system could identify more than 50 neurological conditions, including stroke, brain tumors, and hemorrhage, with accuracy reaching as high as 97.5 percent.
The model was also able to assess how urgently a patient required care and could automatically alert the appropriate specialist when critical findings were detected.
Hollon described the technology as comparable to a conversational AI for medical imaging, highlighting its potential impact in settings where specialist expertise may be limited. He noted that tools like Prima could be especially valuable in rural or resource-constrained areas, where rapid access to expert image interpretation is often a challenge.
A 2024 study titled “AI in radiology: From promise to practice – A guide to effective integration,” authored by Sanaz Katal, Benjamin York, and Ali Gholamrezanezhad, points out a critical gap in the use of artificial intelligence in diagnostic radiology.
While AI shows strong promise, its real-world clinical adoption remains limited because many systems are unable to factor in clinical history or prior and concurrent imaging studies.
This shortcoming can lead to diagnostic inaccuracies that directly affect patient care. Most current AI models excel at narrow, binary tasks, such as identifying the presence or absence of hemorrhage or fractures, but are often trained on datasets that lack broader clinical context.
Without patient background information or comparison with earlier scans, image interpretation can become distorted.
The authors highlight emerging approaches designed to address these challenges, including multimodal data fusion, hybrid neural network architectures, and domain adaptation strategies.
SOURCE: Radiology Business
