The article below is a continuation of our interview with Professor Charles Kahn on the changing landscape of radiology when it comes to artificial intelligence (AI). In our previous article, Kahn discussed his thoughts on publishing in the field of AI, structuring articles, and data sharing. We continue now as our conversation changes course to the future of AI and how this will impact the field of radiology and the radiologist.
A common fear among workers throughout history, especially during the Industrial Revolution, has been the idea that machines and technology will replace us. This fear was present in the 19th century, when factories transitioned to machine labor; in the 20th century, when more advanced technology like computers came onto the scene; and it persists to the present day. With artificial intelligence (AI) becoming such a ubiquitous concept, it is only logical that this fear has also reached the healthcare industry, particularly the medical specialty that has received the most attention when it comes to AI: radiology. However, Professor Charles Kahn believes that these fears are more irrational than rational.
Radiologists and AI: a complementary relationship
According to Professor Charles Kahn, professor and vice chairman of radiology at the University of Pennsylvania and Editor of Radiology: Artificial Intelligence, AI should not be seen as a replacement for the physician, but as a complement to the physician. “AI provides remarkable opportunities for us to improve the way we deliver care to our patients and to make what we do more valuable to them.”
Even if AI tools could perform the entire job of a radiologist, that scenario raises a question: how would you like to receive a diagnosis from, or have an important conversation about your health with, a computer program? According to Kahn, a physician has to be involved in the process somewhere. He goes on to say that radiologists should embrace AI: a radiologist who takes advantage of these tools can achieve more than either could working alone. Kahn states, “To evaluate head CTs, for example, looking for stroke, it is very important to do that rapidly. If a patient comes in with acute symptoms of stroke, you want to be able to come to a conclusion quickly. AI systems can provide consistent performance at all hours of the day and night and can help prioritize for human review those exams of greatest concern. In that case, a machine is very effective in terms of being able to provide information that’s useful.”
However, there is another side to the efficiency and speed of a machine. “There are always unanticipated circumstances; there are conditions that people didn’t program for. For example, you’re doing a head CT but the patient has a skull malformation, or maybe they’ve had surgery and part of the bone has been removed, and the AI system has never seen that before,” Kahn states. Issues like this can occur when using technology, which is why, at this point in time, there is no complete substitute for the human brain and its intuition.
A similar example is that of self-driving cars. Kahn discusses how routine situations, such as driving on a highway, are where self-driving cars can really shine; basic functions such as reading traffic flow, changing lanes, and overtaking slower cars can be quite simple. However, in a city in a country like Vietnam or France, where trams share the street with cars, people ride bicycles, and the occasional pedestrian runs out to cross the street where there is no crosswalk or traffic light, these programmed functions may not perform as well as a human being behind the wheel. “I think humans do very well at solving problems that are one-off. And AI systems do very well at solving problems that they’ve seen a thousand times before.”
“Because that’s where the money is”
As AI tools and technology become more prevalent in radiology and healthcare, issues of liability are also moving to the forefront of the conversation: Who is at fault for a misdiagnosis? If an AI system makes a mistake, who gets sued? Who is responsible for misreading a given scan? To address this question of liability, Kahn brought up “Sutton’s Law”: when notorious American bank robber Willie Sutton was asked by a reporter why he robbed banks, he famously answered, “Because that’s where the money is”. The use of AI opens a variety of complex issues about who bears responsibility, and also about who can be sued most effectively. Moreover, one has to consider the legal systems of different countries; policies or laws that protect or implicate a physician in a country operating within the European Union may not apply to a physician in the United States.
Though it contradicts the visions of a dystopian world run by robots that movies and literature have taught us to expect, Kahn has a more positive outlook for the near future: he believes strongly that physicians working with AI will provide the best possible care for their patients. Kahn says, “Despite what some have said about AI obviating the need for radiologists, patients and AI developers both need radiologists more than ever. We have a critical role to play in helping to build AI systems and in assuring that those systems are safe, effective, and focused on clinically important problems. AI systems are machines; it’s up to us as physicians to determine how we can best use them to serve our patients.”