
In his outstanding keynote lecture at this year’s EuSoMII Annual Meeting, Tim Leiner issued a strong call to action. In order to bring artificial intelligence (AI) into clinical routine, three main challenges must be overcome:

    1. Large datasets are needed to validate algorithm performance in real life.
    2. These datasets should be used to compare different algorithms under real-life circumstances.
    3. Quality standards beyond the CE mark and FDA 510(k) clearance are needed to judge an algorithm’s performance.

The relevance of these challenges is clear. Despite the enormous amount of venture capital flowing into healthcare AI startups and companies, there is a lack of evidence that any hospital is currently using AI in real-world applications [1]. As Tim Leiner explained, this is due to obstacles that are currently not being adequately addressed. Among others, the current lack of valid evidence on an algorithm’s performance and the unsolved question of how liability is handled in case of false results are perceived as major hurdles to implementing AI in clinical routine. Furthermore, the “black-box” nature of the algorithms leads to a lack of trust from physicians. Leiner went on to explain that some of these challenges can be overcome by laying the technical foundations for integrating AI algorithms in a vendor-neutral way, so that experience can be collected and clinical validation studies can be performed with greater ease.

At UMC Utrecht, the Department of Radiology has developed a generic infrastructure (IMAGR [2]) that enables the integration of AI algorithms from different vendors and makes them accessible on all clinical reporting workstations. Their impressive work relies mainly on freely available open-source software and offers users a single portal front end through which any imaging study can be sent to any AI algorithm and its results visualized directly.

Such technical solutions could open up possibilities to address the main challenges mentioned above. If we as physicians do not actively participate in rigorously vetting AI algorithms, not only will we miss the opportunity to guide developments in this field, but we will also ultimately remain liable for the algorithms’ errors, Tim Leiner explained. For radiologists not to be liable for an algorithm’s incorrect results, such algorithms need to be accepted as the standard of care, which, understandably, can only happen if their performance has been sufficiently assessed under real-life conditions and their added value has been demonstrated.

It will require considerable effort from the radiological community, not only from individual researchers but also from societies such as the ESR and EuSoMII, to advocate for vendor-neutral approaches to AI integration and the development of validation databases. However, as Tim Leiner showed, these efforts will certainly pay off and help to shape the future of our profession.
