Experts reveal bias in AI models for medical imaging


Artificial intelligence and machine learning (AI/ML) technologies continue to find new applications across many disciplines. Medicine is no exception, with AI/ML being used for diagnosis, prognosis, risk assessment, and assessment of treatment response in a range of diseases. In particular, AI/ML models are finding increasing application in medical image analysis, including X-ray, computed tomography, and magnetic resonance images. A key requirement for the successful application of AI/ML models in medical imaging is ensuring their proper design, training, and use. In practice, however, it is very challenging to develop an AI/ML model that works well for all members of the population and generalizes to all circumstances.

Credits: MIDRC, midrc.org/bias-awareness-tool.


Just like humans, AI/ML models can be biased, which can lead to different treatment of medically similar cases. Whatever factors underlie such bias, it is important to address it and to ensure fairness, equity, and trust in AI/ML for medical imaging. This requires identifying possible sources of bias in medical imaging AI/ML and developing strategies to mitigate them. Failure to do so can result in unequal benefits for patients and exacerbate inequities in access to healthcare.

As reported in the Journal of Medical Imaging (JMI), a multi-institutional team of experts from the Medical Imaging and Data Resource Center (MIDRC), including medical physicists, AI/ML researchers, statisticians, physicians, and scientists from regulatory agencies, is working on this issue. In a comprehensive report, they identify 29 potential sources of bias that can arise along the five main steps of developing and implementing medical imaging AI/ML: data collection, data preparation and annotation, model development, model evaluation, and model deployment, with several biases potentially occurring in more than one step. Mitigation strategies are discussed, and further information is available on the MIDRC website.

One major source of bias lies in data collection. For example, collecting images from only one hospital or one type of scanner can result in a biased data set. Data collection biases can also arise from differences in how certain social groups are treated, both during the study and within the healthcare system as a whole. In addition, data can become outdated as medical knowledge and practice evolve, introducing temporal bias into AI/ML models trained on such data.
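
For illustration only (this sketch is not drawn from the JMI report), a simple audit of dataset metadata can surface such imbalance; the column names and the warning threshold below are assumptions made for the example:

import pandas as pd

def collection_imbalance_report(metadata, columns=("site", "scanner_model", "acquisition_year")):
    # A dataset dominated by one hospital, one scanner model, or one narrow
    # time window is a warning sign for collection and temporal bias.
    for col in columns:
        shares = metadata[col].value_counts(normalize=True)
        print(f"\n{col}:")
        print(shares.round(3).to_string())
        if shares.iloc[0] > 0.8:  # illustrative threshold, not a standard
            print(f"  WARNING: {shares.index[0]!r} supplies {shares.iloc[0]:.0%} of the images")

# Toy metadata for demonstration only:
meta = pd.DataFrame({
    "site": ["Hospital A"] * 90 + ["Hospital B"] * 10,
    "scanner_model": ["Vendor X"] * 85 + ["Vendor Y"] * 15,
    "acquisition_year": [2019] * 70 + [2023] * 30,
})
collection_imbalance_report(meta)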

Another source of bias lies in the preparation and annotation of data and is closely related to data collection. At this step, biases can be introduced based on how the data is labeled before being fed into the AI/ML model for training. Such bias may stem from the personal bias of the annotator or from confusion regarding how the data itself is presented to users tasked with labeling it.
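
One common way to quantify that kind of labeling disagreement is an inter-annotator agreement statistic such as Cohen's kappa. The minimal sketch below uses toy labels from two hypothetical readers and is not part of the study:

from sklearn.metrics import cohen_kappa_score

# Toy labels from two readers annotating the same ten images
# ("finding present" = 1); these values are illustrative only.
reader_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
reader_2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

kappa = cohen_kappa_score(reader_1, reader_2)
print(f"Cohen's kappa: {kappa:.2f}")  # near 1 = strong agreement, near 0 = chance-level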

Bias can also arise during model development, based on how the AI/ML model itself is designed and constructed. One example is inherited bias, which occurs when the output of an already biased AI/ML model is used to train another model. Other examples of bias in model development include bias caused by unequal representation of the target population or originating from historical circumstances, such as societal and institutional biases that lead to discriminatory practices.
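
As a hedged sketch of one possible countermeasure to unequal representation (a generic reweighting technique, not a method prescribed by the report), per-sample weights can be set inversely to subgroup frequency; the subgroup labels are placeholders:

from collections import Counter

# Toy training-set composition; subgroup labels are placeholders.
subgroups = ["group A"] * 800 + ["group B"] * 150 + ["group C"] * 50
counts = Counter(subgroups)
n, k = len(subgroups), len(counts)

# Weight each example inversely to its subgroup's frequency so that every
# subgroup contributes equally to the training objective in aggregate.
sample_weights = [n / (k * counts[g]) for g in subgroups]

print({g: round(n / (k * c), 2) for g, c in counts.items()})
# Many training APIs accept such weights, e.g. the sample_weight argument
# of scikit-learn estimators' fit() methods.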

Model evaluation can also be a potential source of bias. For example, testing a model’s performance can introduce bias either by comparing against an already biased data set or by using an inappropriate statistical model.
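
A minimal sketch of subgroup-stratified evaluation, using synthetic scores in place of a real test set, shows the reporting pattern: performance is broken out per subgroup rather than hidden behind a single pooled number.

import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic scores and subgroup labels for a binary task; placeholders only.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.3, size=200), 0, 1)
group = rng.choice(["group_1", "group_2"], size=200)

print(f"overall AUC: {roc_auc_score(y_true, y_score):.3f}")
for g in np.unique(group):
    mask = group == g
    print(f"{g}: AUC = {roc_auc_score(y_true[mask], y_score[mask]):.3f} (n = {mask.sum()})")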

Finally, bias can also creep in when AI/ML models are deployed in real-world settings, particularly through the way system users interact with them. For example, bias is introduced when the model is applied to an image type or configuration it was not intended for, or when the user becomes over-reliant on automation.

In addition to identifying and thoroughly describing these potential sources of bias, the team suggests possible mitigation approaches and best practices for implementing medical imaging AI/ML models. The article thus provides valuable insight for researchers, clinicians, and the general public into the limitations of AI/ML in medical imaging, as well as a roadmap for its improvement in the near future. This, in turn, could support a fairer and more equitable adoption of medical imaging AI/ML models in the future.

Read the Gold Open Access article by K. Drukker et al., “Towards fairness in artificial intelligence for medical image analysis: identification and mitigation of potential bias in the roadmap from data collection to model deployment,” J. Med. Imaging 10(6), 061104 (2023), doi: 10.1117/1.JMI.10.6.061104.



