
Communication between radiologists and patients will take centre stage at ECR, which will be held in July this year, and many think AI is an opportunity to improve this relationship. Patient experience with algorithms used in radiology has been positive, but questions around patient data privacy still need to be clarified.

Improving reporting

Patient-radiologist communication has plenty of room for improvement, and AI could help make big strides, especially in reporting, according to Eldad Elnekave, an interventional radiologist at Rabin Medical Centre in Petah Tikva, Israel, and Chief Medical Officer of Zebra Medical Vision.

“It’s one of the low-hanging fruits in AI: taking the report and translating it in a way that the patient can understand. In Israel, a lot of people get their radiology reports sent directly to them before their referring physician sees them, and get distressed as a result. Text would be the place to start for AI to improve communication between the radiologist and the patient,” he said.

A structured report would make radiology findings more understandable to patients. New AI tools, such as standardised language, common data elements, macros and templates, are being developed to improve patient care and radiologists’ daily workflow.
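The report-translation idea described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's product: the findings schema, the `GLOSSARY` entries and the `to_patient_summary` helper are all assumptions made for the example.

```python
# Sketch: turning structured report findings into patient-friendly
# sentences via a plain-language glossary. Schema and glossary terms
# are illustrative only.

GLOSSARY = {
    "pulmonary nodule": "a small spot on the lung",
    "vertebral compression fracture": "a collapsed bone in the spine",
    "hepatic steatosis": "a build-up of fat in the liver",
}

def to_patient_summary(findings):
    """Render a list of structured findings as plain-language sentences."""
    lines = []
    for f in findings:
        plain = GLOSSARY.get(f["term"], f["term"])
        lines.append(f"The scan shows {plain} ({f['location']}).")
    return " ".join(lines)

report = [
    {"term": "pulmonary nodule", "location": "right upper lobe"},
    {"term": "hepatic steatosis", "location": "liver"},
]
print(to_patient_summary(report))
```

In practice the mapping would come from curated terminology (for example common data elements), not a hand-written dictionary, but the principle is the same: structured input makes automatic translation tractable.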

Wende Gibbs, a neuroradiologist at the Mayo Clinic in Phoenix, Arizona, US, will share related experiences and solutions in a dedicated Professional Challenges session at ECR 2020.

“Radiologists increasingly recognise that the language used in the traditional prose report is variable and confusing – characteristics that hinder comprehension for the ordering physician, the patient and even other radiologists who perform the follow-up examinations,” she said.

AI has become natural in healthcare

The current trend is towards integrating AI into every aspect of life, including radiology. Patients expect their images to be processed through AI, just as they would when buying a new car, Elnekave explained.

“People know a car made in 2020 uses visual recognition software. They would also expect that kind of advance in a hospital. When patients know that AI is available and helps radiologists to detect things that could be missed, they will welcome it wholeheartedly,” he said.

Erik Briers, science writer, consultant and liaison officer of the European Prostate Cancer Coalition (Europa Uomo) in Hasselt, Belgium, also shares this opinion.

“I have no problems with AI being used in radiology. It’s a tool and radiologists already use quite a few in the back office – for example, the software that analyses the images. AI is just adding something to that,” he said.

AI is exactly the kind of tool that can expedite diagnosis and improve efficiency. “It’s an amazing tool. It has special properties and offers new opportunities to improve medical imaging,” Briers said.

AI will boost the radiology service, but on the condition that it is fed excellent diagnostic pictures and that the responsibility remains in the hands of the radiologist, both Briers and Elnekave agreed.

Any medical algorithm must meet very strict requirements. The tight regulation around AI and medical devices should be a quality assurance for patients that healthcare professionals remain in control, Elnekave believes.

“Medicine is probably the last place you will see AI take a part of the job because regulation is so stringent and the culture is relatively conservative,” he said.

In the end, patients know AI is here to make life better and increase efficiency. “People are waiting for days to get their imaging results now, but with AI they could get them much faster,” he added.

Good patient feedback

In 2019, the Fracture Prevention Service at Oxford University Hospitals became the first unit in the UK to use AI to identify early signs of osteoporosis, a disease for which fewer than 20% of patients are diagnosed before a fracture occurs. Every CT scan carried out at the hospital, for any purpose, is now run through Zebra Medical’s algorithm to help detect the disease as early as possible.

The algorithm, which was also used in a recently published study in Nature (1), was run three to six months after the examination, and patients were recalled whenever early signs of fracture were picked up on a scan. Feedback has been largely positive, Elnekave reported.

“Patients really appreciated the callback, even months later; even if they had initially been screened for something else,” he said.

One Oxford patient, whose condition was detected by the technology after she was examined for shortness of breath, reportedly said she regarded this opportunistic screening as “a gift” (2).

“I don’t get the feeling that patients were afraid of having an algorithm run their studies. We’ve been trying to show value and I haven’t seen or heard of any patients being negative,” Elnekave said.

What matters to patients is the result, not the technology used to achieve it, Briers believes.

“I don’t know if AI has helped process my images, and I don’t know if I should. I’m not aware of the basic software used for my images. In MRI, there’s a huge amount of software, and the raw data doesn’t even look like an image. I’m not interested in that. What I want is the best possible diagnosis with the highest possible precision. If radiologists get that using AI, then do so, and well done!” he said.

Patients would be very enthusiastic if AI could improve follow-up of a lesion over a long period of time, he added.

“If you have a benign bone tumour, it may evolve very slowly over decades. The patient has a new scan every year, but you can never place the knee exactly the same way as last time. AI may take the picture and reposition it artificially so that it matches the original’s and enables you to spot any change more precisely,” he explained.

An algorithm could easily crunch a patient’s images over the past 30 years and come up with extremely precise findings regarding lesion changes. “Radiologists should explore that area,” he concluded.
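The repositioning Briers describes is, at its core, image registration: aligning a follow-up scan to the baseline before comparing lesions. A minimal toy sketch of the idea follows, aligning two 2D arrays by brute-force search over integer shifts; real registration software handles rotation, scaling and interpolation, so the `best_shift` helper here is purely illustrative.

```python
# Toy registration sketch: find the integer shift that best realigns
# a follow-up "scan" to the baseline (assumed example, not a real tool).
import numpy as np

def best_shift(fixed, moving, max_shift=5):
    """Return the (dy, dx) shift of `moving` that minimises squared error."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.mean((fixed - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

baseline = np.zeros((32, 32))
baseline[10:14, 10:14] = 1.0  # "lesion" in the baseline scan
# Simulate the patient lying slightly differently at follow-up:
followup = np.roll(np.roll(baseline, 3, axis=0), -2, axis=1)

print(best_shift(baseline, followup))  # → (-3, 2), undoing the displacement
```

Once the displacement is recovered, the follow-up can be resampled onto the baseline grid so that any change in the lesion stands out rather than being masked by positioning differences.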

Data privacy

When training algorithms, radiologists should handle patient data carefully. Patients should be asked for their approval prior to their data being used, Briers insisted.

“We are the owners of our data, so we also own our images. It must be standard. Today, you have to accept almost everything and sign consent forms, so it should be the same for images,” he said.

Images must be anonymised before they are used to train algorithms, so that they can’t be linked to any individual. De-identification is the norm: it consists of erasing revealing data from the DICOM header and from the images themselves, for example jewellery, implant IDs or burned-in labels. The patient’s and provider’s names in the report must also be deleted in the process.
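The header-scrubbing part of de-identification can be sketched as below. This is a simplified stand-in: a plain dictionary plays the role of the DICOM header, and the set of tags to blank is a small illustrative subset (production pipelines would use a DICOM library such as pydicom and follow the standard's full de-identification profiles).

```python
# Sketch: blanking identifying fields in a (dict stand-in for a)
# DICOM header. Tag list is a small illustrative subset only.

IDENTIFYING_TAGS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "ReferringPhysicianName", "InstitutionName",
}

def deidentify(header):
    """Return a copy of the header with identifying fields blanked."""
    return {
        tag: ("" if tag in IDENTIFYING_TAGS else value)
        for tag, value in header.items()
    }

header = {
    "PatientName": "DOE^JANE",
    "PatientID": "12345",
    "Modality": "CT",
    "StudyDate": "20190604",
}
print(deidentify(header))
```

Note that header scrubbing alone is not enough: burned-in text and identifying features visible in the pixel data, as mentioned above, must be handled separately.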

However, reports have emerged that millions of images are already exposed online, as many medical offices disregard best practices in security and connect their PACS server directly to the Internet without a password (3) (4).

Patients are not against sharing their data to advance science and help train an algorithm, but the consequences of a privacy breach must be clear to everyone, Briers stressed. “Seriously ill patients are usually very willing to donate their data to advance research and help future generations. But researchers must remember that patient data must be treated with caution. There should be huge penalties for disclosing patient information,” he said.

Privacy issues can be confusing when it comes to imaging data, but as a rule of thumb the benefit outweighs the risk, Elnekave argued. “Generally, people who have been holding on tight to data have stayed behind,” he concluded.

