A Review of Causality for Learning Algorithms in Medical Image Analysis
Abstract
Medical image analysis is a vibrant research area that offers doctors and medical practitioners invaluable insight and the ability to accurately diagnose and monitor disease. Machine learning provides an additional boost to this area. However, machine learning for medical image analysis is particularly vulnerable to natural biases, such as domain shifts, that affect algorithmic performance and robustness. In this paper we analyze machine learning for medical image analysis within the framework of Technology Readiness Levels and review how causal analysis methods can fill a gap when creating robust and adaptable medical image analysis algorithms. We review methods using causality in medical imaging AI/ML and find that causal analysis has the potential to mitigate critical problems for clinical translation, but that uptake and clinical downstream research have been limited so far.
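To make the domain-shift problem concrete, here is a minimal, self-contained sketch (our illustration, not the paper's method; numpy and scikit-learn assumed, all variable names hypothetical). A classifier trained at a site where a scanner artifact happens to correlate with disease scores well in-domain but degrades at a site without that artifact:

```python
# Toy illustration (not from the paper) of how a site-specific artifact
# can inflate in-domain accuracy and hurt out-of-domain robustness.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_site(n, artifact_strength):
    """Simulate 2D 'image features': disease shifts feature 0 (true signal);
    a scanner artifact (feature 1) tracks disease only as strongly as
    `artifact_strength` allows at this site."""
    y = rng.integers(0, 2, n)
    f_disease = y + rng.normal(0, 1.0, n)                    # noisy true signal
    f_artifact = artifact_strength * y + rng.normal(0, 0.5, n)
    return np.column_stack([f_disease, f_artifact]), y

X_a, y_a = make_site(2000, artifact_strength=2.0)  # site A: artifact present
X_b, y_b = make_site(2000, artifact_strength=0.0)  # site B: artifact absent

clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print("site A (in-domain) accuracy:", accuracy_score(y_a, clf.predict(X_a)))
print("site B (shifted)   accuracy:", accuracy_score(y_b, clf.predict(X_b)))
```

Distinguishing stable disease mechanisms from such site-specific shortcuts is one motivation for the causal methods the review surveys.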
Related Scientific Articles
V. Cheplygina, G. Varoquaux
12 April 2022
Research in computer analysis of medical images bears many promises to improve patients' health. However, a number of systematic challenges are slowing down the progress of the field, from limitations of the data, such as biases, to research incentives, such as optimizing for publication. In this paper we review roadblocks to developing and assessing methods. Building our analysis on evidence from the literature and data challenges, we show that at every step, potential biases can creep in. On a positive note, we also discuss ongoing efforts to counteract these problems. Finally, we provide recommendations on how to further address these problems in the future.
C. Camargo, L. Liang, Y. Raita, + 1 other
6 July 2021
Clinicians handle a growing amount of clinical, biometric, and biomarker data. In this "big data" era, there is an emerging faith that the answer to all clinical and scientific questions resides in "big data" and that data will transform medicine into precision medicine. However, data by themselves are useless. It is the algorithms encoding causal reasoning and domain (e.g., clinical and biological) knowledge that prove transformative. The recent introduction of (health) data science presents an opportunity to re-think this data-centric view. For example, while precision medicine seeks to provide the right prevention and treatment strategy to the right patients at the right time, its realization cannot be achieved by algorithms that operate exclusively in data-driven prediction modes, as most machine learning algorithms do. A better understanding of data science and its tasks is vital to interpret findings and translate new discoveries into clinical practice. In this review, we first discuss the principles and major tasks of data science by organizing it into three defining tasks: (1) association and prediction, (2) intervention, and (3) counterfactual causal inference. Second, we review commonly used data science tools with examples from the medical literature. Lastly, we outline current challenges and future directions in the fields of medicine, elaborating on how data science can enhance clinical effectiveness and inform medical practice. As machine learning algorithms become ubiquitous tools to quantitatively handle "big data," their integration with causal reasoning and domain knowledge is instrumental to qualitatively transform medicine, which will, in turn, improve the health outcomes of patients.
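As a hedged illustration of how the three tasks differ, consider a toy structural causal model in which sicker patients are more likely to be treated (a minimal sketch of our own, using numpy; the model and numbers are assumptions, not drawn from the review):

```python
# Toy structural causal model (our assumption, not the review's):
#   S ~ severity, T := 1[S + noise > 0]  (sicker patients more often treated),
#   Y := S - T + noise                   (treatment lowers Y, a symptom score).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
S = rng.normal(0, 1, n)                        # confounder: disease severity
T = (S + rng.normal(0, 1, n) > 0).astype(int)  # observational treatment policy
U = rng.normal(0, 1, n)                        # outcome noise
Y = S - T + U                                  # treatment helps (lowers score)

# (1) Association/prediction: the naive contrast is confounded by S --
# treated patients look slightly *worse* even though treatment helps.
print("E[Y|T=1] - E[Y|T=0] =", Y[T == 1].mean() - Y[T == 0].mean())

# (2) Intervention: do(T=1) vs do(T=0) -- set T by fiat, keep the same S, U.
print("E[Y|do(T=1)] - E[Y|do(T=0)] =", (S - 1 + U).mean() - (S + U).mean())

# (3) Counterfactual: for one treated patient, replay their own S and U
# under T=0 ("what would *this* patient's outcome have been?").
i = np.flatnonzero(T == 1)[0]
print("factual Y:", Y[i], " counterfactual Y under T=0:", S[i] + U[i])
```

The observational contrast in (1) suggests treated patients fare slightly worse, while the interventional contrast in (2) recovers the true benefit of about -1; the counterfactual in (3) answers an individual-level "what if" that neither of the first two addresses.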
Heather M. Whitney, Sanmi Koyejo, K. Drukker, + 9 others
26 April 2023
Abstract. Purpose: There is increasing interest in developing medical imaging-based machine learning methods, also known as medical imaging artificial intelligence (AI), for the detection, diagnosis, prognosis, and risk assessment of disease, with the goal of clinical implementation. These tools are intended to improve traditional human decision-making in medical imaging. However, biases introduced in the steps toward clinical deployment may impede their intended function, potentially exacerbating inequities. Specifically, medical imaging AI can propagate or amplify biases introduced in the many steps from model inception to deployment, resulting in systematic differences in the treatment of different groups. Recognizing and addressing these sources of bias is essential for algorithmic fairness and trustworthiness and contributes to a just and equitable deployment of AI in medical imaging. Approach: Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. We identified sources of bias in AI/ML, mitigation strategies for these biases, and developed recommendations for best practices in medical imaging AI/ML development. Results: Five main steps along the roadmap of medical imaging AI/ML were identified: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, or bias categories, we identified 29 sources of potential bias, many of which can impact multiple steps, as well as corresponding mitigation strategies. Conclusions: Our findings provide a valuable resource to researchers, clinicians, and the public at large.
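As one hypothetical example of how a bias audit can enter step (4), model evaluation, the sketch below compares sensitivity (true-positive rate) across patient subgroups; the helper function and toy data are our own assumptions, not a method from the paper:

```python
# Hedged sketch (our illustration): a subgroup sensitivity audit at the
# model-evaluation step. A large gap in per-group TPR flags possible bias.
import numpy as np

def tpr_by_group(y_true, y_pred, group):
    """Per-group true-positive rate over the positive cases in each group."""
    out = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        out[g] = float(y_pred[mask].mean()) if mask.any() else float("nan")
    return out

# Toy predictions: the model misses more positives in group "B".
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1])
group  = np.array(["A"] * 6 + ["B"] * 6)
print(tpr_by_group(y_true, y_pred, group))  # {'A': 0.75, 'B': 0.25}
```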
Luís Pinto-Coelho
1 December 2023
The integration of artificial intelligence (AI) into medical imaging has ushered in an era of transformation in healthcare. This literature review explores the latest innovations and applications of AI in the field, highlighting its profound impact on medical diagnosis and patient care. The innovation segment explores cutting-edge developments in AI, such as deep learning algorithms, convolutional neural networks, and generative adversarial networks, which have significantly improved the accuracy and efficiency of medical image analysis. These innovations have enabled the rapid and accurate detection of abnormalities, from identifying tumors during radiological examinations to detecting early signs of eye disease in retinal images. The article also covers various applications of AI in medical imaging, including radiology, pathology, cardiology, and more. AI-based diagnostic tools not only speed up the interpretation of complex images but also improve the early detection of disease, ultimately delivering better outcomes for patients. Additionally, AI-based image processing facilitates personalized treatment plans, thereby optimizing healthcare delivery. This literature review highlights the paradigm shift that AI has brought to medical imaging and its role in revolutionizing diagnosis and patient care. By combining cutting-edge AI techniques with their practical applications, it is clear that AI will continue to shape the future of healthcare in profound and positive ways.
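For concreteness, here is a minimal sketch of the kind of convolutional network the review refers to, e.g., for abnormality detection in a single-channel radiograph (our illustration in PyTorch; the architecture and sizes are arbitrary assumptions, not taken from the review):

```python
# Minimal CNN sketch (hypothetical architecture, for illustration only).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Binary abnormality detector for single-channel (e.g., X-ray) images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, x):
        # Returns a logit; apply a sigmoid for P(abnormal).
        return self.head(self.features(x))

logits = TinyCNN()(torch.randn(4, 1, 64, 64))  # batch of 4 toy images
print(logits.shape)  # torch.Size([4, 1])
```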
J. Caspers
25 April 2021
During the last decade, data science technologies such as artificial intelligence (AI) and radiomics have emerged strongly in radiologic research. Radiomics refers to the (automated) extraction of a large number of quantitative features from medical images [1]. A typical radiomics workflow involves image acquisition and segmentation as well as feature extraction and prioritization/reduction as preparation for its ultimate goal, which is predictive modeling [2]. This final step is where radiomics and AI typically intertwine to build a gainful symbiosis. In recent years, the field of medical imaging has seen a rising number of publications on radiomics and AI applications with increasingly refined methodologies [3, 4]. The formulation of best-practice white papers and quality criteria for publications on predictive modeling, such as the TRIPOD [5] or CLAIM [6] criteria, has substantially promoted this qualitative gain. Consequently, relevant methodological approaches advancing the generalizability of predictive models are increasingly observed in recent publications, e.g., the accurate composition of representative and unbiased datasets, avoidance of data leakage, the incorporation of (nested) cross-validation approaches for model development, particularly on small datasets, or the use of independent, external test samples. In this regard, the work of Song et al [7] on a clinical-radiomics nomogram for prediction of functional outcome in intracranial hemorrhage, published in the current issue of European Radiology, is just one example of the general trend.

However, in contrast to the rising utilization and importance of predictive modeling in medical imaging research, these technologies have not been widely adopted in clinical routine. Besides regulatory, medicolegal, or ethical issues, one of the major hurdles for broad usage of AI and predictive models is the lack of trust in these technologies by medical practitioners, healthcare stakeholders, and patients. After more than a decade of scientific progress on AI and predictive modeling in medical imaging, we should now take the opportunity to focus our research on the trustworthiness of AI and predictive modeling in order to trailblaze their translation into clinical practice.

Several prospects could enhance the trustworthiness of predictive models for clinical use. One of the main factors will be transparency on their reliability in real-world applications. Large multicentric prospective trials will be paramount to assess and validate the performance, and especially the generalizability, of predictive models in a robust and minimally biased fashion. Additionally, benchmarking of AI tools by independent institutions on external, heterogeneous real-world data would provide transparency on model performance and enhance trust. In general, trust in new technologies is strongly influenced by the comprehensibility of these techniques for their users. In the field of predictive modeling, this topic is often described with the term "explainable AI," which is being increasingly considered in current research [8]. Explainable AI seeks to unravel the "black-box" nature of many predictive models, including artificial neural networks, by making decision processes comprehensible, e.g., by revealing the features that drive their decisions. Trust in predictive models will therefore substantially increase when models are developed transparently and AI systems are made comprehensible.
Another issue with current AI tools is that they mainly incorporate narrow AI, i.e., they address only one very specific task. We are currently miles, if not light-years, away from building real strong AI, that is, artificial intelligence having the capacity to learn any intellectual task that a human being can. However, building more comprehensive AI systems that solve multiple predictive tasks might enhance their trustworthiness for users. For example, a user might be inclined to follow thoughts along the line of "I have good experience in this system predicting the …"

This comment refers to the article available at https://doi.org/10.1007/s00330-021-07828-7.
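The editorial above singles out nested cross-validation and avoidance of data leakage as markers of methodological quality; the following minimal scikit-learn sketch (our illustration on synthetic data, not the editorial's code) shows the standard pattern, with hyperparameters tuned in an inner loop and performance estimated on outer folds never touched by tuning:

```python
# Nested cross-validation sketch: inner loop tunes, outer loop evaluates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=30, random_state=0)

# Scaling inside the pipeline prevents leakage of test-fold statistics.
model = make_pipeline(StandardScaler(), SVC())
inner = GridSearchCV(model, {"svc__C": [0.1, 1, 10]},
                     cv=StratifiedKFold(5, shuffle=True, random_state=0))
outer = StratifiedKFold(5, shuffle=True, random_state=1)

scores = cross_val_score(inner, X, y, cv=outer)
print("nested CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```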