Researchers at Stanford University in California have developed a new artificial intelligence tool that can identify areas of a brain scan that are likely to contain aneurysms.
In a paper published on 7 June in JAMA Network Open, the researchers described how the tool, built on an algorithm called HeadXNet, boosted clinicians’ ability to locate aneurysms – bulges in blood vessels in the brain that can leak or burst open, potentially leading to stroke, brain damage and death.
Clinicians using the tool correctly found six more aneurysms per 100 scans containing aneurysms, and the tool “also improved consensus among the interpreting clinicians”.
While the success of HeadXNet in these experiments is promising, the team of researchers cautioned that “further investigation is needed to evaluate generalizability of the AI tool prior to real-time clinical deployment given differences in scanner hardware and imaging protocols across different hospital centers”. They plan to address such problems through “multi-center collaboration”.
Combing brain scans for signs of an aneurysm can mean scrolling through hundreds of images. Aneurysms come in many sizes and shapes and balloon out at tricky angles – some register as no more than a blip within the movie-like succession of images.
“There’s been a lot of concern about how machine learning will actually work within the medical field,” Allison Park, a Stanford graduate student in statistics and co-lead author of the paper, said. “This research is an example of how humans stay involved in the diagnostic process, aided by an artificial intelligence tool.”
“Search for an aneurysm is one of the most labor-intensive and critical tasks radiologists undertake,” Kristen Yeom, associate professor of radiology and co-senior author of the paper, added. “Given inherent challenges of complex neurovascular anatomy and potential fatal outcome of a missed aneurysm, it prompted me to apply advances in computer science and vision to neuroimaging.”
Yeom brought the idea to the AI for Healthcare Bootcamp run by Stanford’s Machine Learning Group, which is led by Andrew Ng, adjunct professor of computer science and co-senior author of the paper. The central challenge was to create an AI tool that could accurately process large stacks of three-dimensional images and “complement diagnostic practice”.
To train their algorithm, Yeom worked with Park and Christopher Chute, a graduate student in computer science, and outlined clinically significant aneurysms detectable on 611 computerized tomography (CT) angiogram head scans.
“We labelled, by hand, every voxel – the 3D equivalent to a pixel – with whether or not it was part of an aneurysm,” Chute, who is also co-lead author of the paper, said. “Building the training data was a pretty gruelling task and there were a lot of data.”
After training, the algorithm decides, for each voxel of a scan, whether an aneurysm is present. The result is overlaid as a semi-transparent highlight on top of the scan, so clinicians can still see what the scan looks like without HeadXNet’s input.
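The per-voxel output described here amounts to a binary segmentation mask blended over the scan. As a minimal illustrative sketch (not the paper’s actual code), with a hypothetical `overlay_mask` helper and a made-up toy volume, the semi-transparent highlighting might work like this:

```python
import numpy as np

def overlay_mask(scan, mask, alpha=0.4):
    """Blend a semi-transparent red highlight onto a grayscale scan.

    scan: 3D array of intensities in [0, 1]
    mask: 3D array of 0/1 per-voxel aneurysm predictions
    Returns an RGB volume in which predicted voxels are tinted red
    while the underlying scan stays visible.
    """
    rgb = np.stack([scan, scan, scan], axis=-1)      # grayscale -> RGB
    red = np.zeros_like(rgb)
    red[..., 0] = 1.0                                # pure red layer
    m = mask[..., None].astype(float)                # broadcast over channels
    return (1 - alpha * m) * rgb + alpha * m * red   # blend only where mask == 1

# Toy example: a 4x4x4 scan with a single predicted voxel
scan = np.full((4, 4, 4), 0.5)
mask = np.zeros((4, 4, 4), dtype=int)
mask[2, 2, 2] = 1
out = overlay_mask(scan, mask)
```

Unmasked voxels are returned unchanged, so the clinician’s view of the raw scan is preserved everywhere the model makes no prediction.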
“We were interested in how these scans with AI-added overlays would improve the performance of clinicians,” Pranav Rajpurkar, a graduate student in computer science and co-lead author of the paper, said. “Rather than just having the algorithm say that a scan contained an aneurysm, we were able to bring the exact locations of the aneurysms to the clinician’s attention.”
Eight clinicians tested HeadXNet by each evaluating a set of 115 brain scans for aneurysms, once with the help of HeadXNet and once without. With the tool, the clinicians correctly identified more aneurysms, reducing the “miss” rate, and were more likely to agree with one another.
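The improvement reported here is essentially a gain in sensitivity (a lower miss rate). As an illustrative sketch with hypothetical counts, not the study’s actual per-reader data, the comparison could be computed like this:

```python
def sensitivity(true_positives, actual_positives):
    """Fraction of aneurysm-containing scans a reader correctly flagged."""
    return true_positives / actual_positives

# Hypothetical counts for one reader, out of 100 scans with aneurysms,
# matching the reported six-per-100 improvement
without_tool = sensitivity(83, 100)   # reader alone
with_tool = sensitivity(89, 100)      # reader aided by HeadXNet's overlay

miss_rate_drop = (1 - without_tool) - (1 - with_tool)
```

The miss rate is just one minus sensitivity, so six additional detections per 100 positive scans translate directly into a six-point drop in the miss rate.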
The researchers believe that the tool did not influence how long it took the clinicians to decide on a diagnosis or their ability to correctly identify scans without aneurysms – a guard against telling someone they have an aneurysm when they don’t.
The machine learning methods that form the core of HeadXNet could likely be trained to identify other diseases both inside and outside the brain, the researchers believe, but there is a “considerable hurdle” in integrating AI medical tools with daily clinical workflow in radiology across hospitals.
Current scan viewers aren’t designed to work with deep learning assistance, so the researchers had to custom-build tools to integrate HeadXNet within scan viewers. Furthermore, variations in real-world data – as opposed to the data on which the algorithm is tested and trained – could reduce model performance.
If the algorithm processes data from different kinds of scanners or imaging protocols, or a patient population that wasn’t part of its original training, it might not work as expected.
“Because of these issues, I think deployment will come faster not with pure AI automation, but instead with AI and radiologists collaborating,” Ng said. “We still have technical and non-technical work to do, but we as a community will get there and AI-radiologist collaboration is the most promising path.”