Humans fare better than AI in reading chest X-rays

A recent study suggests that while AI tools may boost radiologists’ confidence in their diagnoses, they cannot be relied upon to detect common lung diseases on chest X-rays.
In a study of more than 2,000 X-rays, researchers pitted 72 radiologists against four widely available AI systems. According to findings published on September 25 in Radiology, the human specialists triumphed.

Lead researcher Dr. Louis Plesner, resident radiologist and PhD fellow in radiology at Herlev and Gentofte Hospital in Copenhagen, Denmark, said: “Chest radiography is a common diagnostic tool, but significant training and experience are required to interpret exams correctly.”

“While AI tools are becoming more and more approved for use in radiological departments, there is an unmet need to further test them in real-life clinical scenarios,” Plesner stated in a journal news release. Although AI tools can assist radiologists in reading chest X-rays, their real-world diagnostic performance is still largely unknown.

According to Plesner, commercially available, FDA-approved AI systems can assist radiologists.

The X-rays used in the study were collected over a two-year period at four Danish institutions. A target diagnosis was present in about one-third of the patients.

The X-rays were examined for three common findings: pleural effusion, a buildup of fluid around the lungs; pneumothorax, or collapsed lung; and airspace disease, a chest X-ray pattern that can be caused by pneumonia or pulmonary edema.

The AI tools’ sensitivity ranged from 72% to 91% for pneumothorax, 63% to 90% for airspace disease, and 62% to 95% for pleural effusion. The more sensitive a test is, the less disease it misses.
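
For readers unfamiliar with the measure, the short Python sketch below shows how sensitivity is computed. The counts are invented for illustration; they are not figures from the study.

# Illustrative sketch: invented counts, not data from the Radiology study.
def sensitivity(true_positives, false_negatives):
    # Sensitivity = the share of actual disease cases a reader correctly flags.
    return true_positives / (true_positives + false_negatives)

# Suppose a reader catches 86 of 100 X-rays that truly show pneumothorax:
print(sensitivity(86, 14))  # 0.86, i.e. 86%; the 14 missed cases are false negatives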

The study found that radiologists outperformed the AI tools at correctly determining whether each of the three conditions was present or absent.

“The AI tools showed moderate to high sensitivity comparable to radiologists for detecting airspace disease, pneumothorax, and pleural effusion on chest X-rays,” Plesner stated. However, compared to radiologists, they “produced more false-positive results [predicting disease when none was present], and their performance declined for smaller targets and when multiple findings were present.”

According to him, “AI systems appear to be quite good at detecting disease, but they fall short of radiologists when it comes to detecting the absence of disease, particularly when the chest X-rays are complex. Too many false-positive diagnoses would result in unnecessary imaging, radiation exposure, and increased costs.”
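
A back-of-the-envelope sketch shows why false positives add up. The roughly one-in-three disease prevalence mirrors the study’s sample, but the 90% sensitivity and 80% specificity figures are assumptions chosen purely for illustration, not results reported by the researchers.

# Illustrative sketch: assumed performance figures, not the study's results.
total = 1000
diseased = total // 3                   # about one-third prevalence, as in the study's sample
healthy = total - diseased

sens = 0.90                             # assumed: 90% of disease cases are caught
spec = 0.80                             # assumed: 80% of healthy patients are correctly cleared

true_positives = sens * diseased        # ~300 cases correctly flagged
false_positives = (1 - spec) * healthy  # ~133 healthy patients flagged anyway

print(round(true_positives), round(false_positives))  # prints: 300 133

Under these assumptions, each of the roughly 133 false alarms would mean follow-up imaging, with the radiation exposure and costs Plesner describes.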

Previous studies claiming that AI was superior to radiologists typically let the radiologists assess only the image, without access to the patient’s clinical history or earlier imaging studies. In actual practice, a radiologist combines all three sources of information when interpreting an imaging exam, Plesner said.
