
Artificial intelligence (AI) has certainly made impressive progress over the past few years. Consequently, many companies have entered the market, bombarding radiologists with diagnostic AI tools that aim to help them in their clinical practice. Navigating this plethora of AI product offerings can be quite challenging at times. How do you know whether the marketing claims would hold up in clinical practice? And what are the important questions to ask vendors?

The authors of a recently published manuscript provide important guidance on which questions should be addressed before making a buy or no-buy decision. In their “Evaluating Commercial AI Solutions in Radiology” (ECLAIR) guidelines, they highlight the ten most important questions to ask vendors:

1. What problem is the application intended to solve, and who is the application designed for?
This might seem obvious, but it is important to clearly establish which specific clinical need the AI tool is expected to address and how it is intended to be used. The application could, for instance, serve as a double reader, a triage tool, or something else, and it may generate different clinical outputs, such as diagnosis, prognosis, or quantification.

2. What are the potential benefits and risks, and for whom?
The relevant outcome measures may vary depending on who should benefit from the use of such tools. Diagnostic tools would most likely be expected to improve patients' clinical outcomes in a given clinical scenario; more workflow-oriented tools may rather help with billing or productivity.

3. Has the algorithm been rigorously and independently validated?
In most cases, it should be expected that AI tools have been thoroughly evaluated with regard to their performance. These studies may, however, reveal some biases. For example, specific patient groups might not have been included, or certain scanner vendors may be underrepresented. In general, specific recommendations apply to the evaluation of AI algorithms, and existing guidelines should be followed (a simple way to check for such subgroup biases is sketched after this list).

4. How can the application be integrated into your clinical workflow and is the solution interoperable with your existing software?
Sooner or later, every good idea has to face clinical and institutional reality, so these questions should be considered early on. Even the best AI tool will not be used if it cannot be integrated seamlessly into existing clinical workflows.

5. What are the IT infrastructure requirements?
It is always a good idea to consult with your IT department as early as possible. They will help you assess interoperability and guide you through issues that may hinder the deployment of the tool, such as network security issues.

6. Does the application conform to the medical device and the personal data protection regulations of the target country, and what class of regulation does it conform to?
Again, this might sound boring, but it can have huge implications if not addressed early on. In Europe, AI tools usually qualify as medical devices and therefore need CE certification, and in the future they will have to comply with the upcoming Medical Device Regulation.

7. Have return on investment (RoI) analyses been performed?
It’s easy to get caught up in enthusiasm for an AI tool, but in the end, the tool will have to be paid for, and unfortunately, healthcare is rarely characterized by an abundance of resources and money. Ideally, RoI analyses should be formally performed to ensure that the solution is locally viable (a back-of-the-envelope example is sketched after this list).

8. How is the maintenance of the product ensured?
Like every system, AI software may suffer a malfunction, and an institution's IT infrastructure will also change over time. In both cases, support from the AI vendor will be needed, and since this could come with a price tag, it should be discussed beforehand. In general, the sustainability of the product over time, including in terms of cost, is important to consider.

9. How are user training and follow-up handled?
There may be AI tools that are seamlessly integrated into existing workflows and do not require specific user training. Others may take some time for users to familiarize themselves with, in which case dedicated user training will certainly be helpful. In any case, there should ideally be a way to contact the vendor if a question comes up.

10. How will potential malfunctions or erroneous results be handled?
The AI tool will be wrong from time to time, so having a way to discuss and fix bugs and errors is certainly desirable. Post-market surveillance will be relevant to AI tools, just as it is for drugs and other products.
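
To illustrate point 3 above, the sketch below shows one simple way a local team could check whether a tool's reported performance holds up across subgroups such as scanner vendors. It is not taken from the ECLAIR guidelines; the file name and column names (label, prediction, score, scanner_vendor) are hypothetical placeholders for a per-case results table from a local pilot or the vendor's validation study.

```python
# Minimal sketch (assumptions noted above): per-subgroup performance check.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

# Hypothetical per-case results: binary ground truth, binary prediction,
# continuous model score, and the scanner vendor for each case.
df = pd.read_csv("validation_results.csv")

for vendor, group in df.groupby("scanner_vendor"):
    sens = recall_score(group["label"], group["prediction"])               # true-positive rate
    spec = recall_score(group["label"], group["prediction"], pos_label=0)  # true-negative rate
    auc = roc_auc_score(group["label"], group["score"])                    # discrimination based on the raw score
    print(f"{vendor}: n={len(group)}, sensitivity={sens:.2f}, "
          f"specificity={spec:.2f}, AUC={auc:.2f}")
```

A clear drop in any subgroup compared with the headline figures is a good reason to ask the vendor for more detail on their validation data.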
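
Similarly, for point 7, a back-of-the-envelope RoI estimate might look like the following. All figures are invented for illustration; a formal analysis would also account for integration, training, and ongoing maintenance costs.

```python
# Back-of-the-envelope RoI sketch; every figure below is hypothetical.
annual_license_cost = 50_000   # yearly license fee (EUR, assumed)
annual_support_cost = 10_000   # yearly maintenance/support fee (EUR, assumed)
annual_benefit = 80_000        # estimated value of time saved or extra throughput (EUR, assumed)

total_cost = annual_license_cost + annual_support_cost
roi = (annual_benefit - total_cost) / total_cost   # standard RoI formula
print(f"Estimated annual RoI: {roi:.0%}")          # (80000 - 60000) / 60000 = 33%
```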

These are just some of the most important questions to consider when deciding on an AI tool. Additional points, as well as more detailed explanations of these questions, can be found in “To buy or not to buy—evaluating commercial AI solutions in radiology (the ECLAIR guidelines)”.

Key points

  • Numerous commercial solutions based on artificial intelligence techniques are now available for sale, and radiology practices have to learn how to properly assess these tools. Decision-making criteria can be complex and involve several actors in the institution.
  • The ECLAIR guidelines propose a framework focusing on practical points to consider when assessing an AI solution in medical imaging, allowing all stakeholders to conduct relevant discussions with manufacturers and reach an informed decision as to whether to purchase an AI commercial solution for imaging applications.

Article: To buy or not to buy—evaluating commercial AI solutions in radiology (the ECLAIR guidelines)

Authors: Patrick Omoumi, Alexis Ducarouge, Antoine Tournier, Hugh Harvey, Charles E. Kahn Jr, Fanny Louvet-de Verchère, Daniel Pinto Dos Santos, Tobias Kober & Jonas Richiardi
