Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging
Abstract
This comprehensive review unfolds a detailed narrative of Artificial Intelligence (AI) making its foray into radiology, a move that is catalysing transformational shifts in the healthcare landscape. It traces the evolution of radiology, from the initial discovery of X-rays to the application of machine learning and deep learning in modern medical image analysis. The primary focus of this review is to shed light on AI applications in radiology, elucidating their seminal roles in image segmentation, computer-aided diagnosis, predictive analytics, and workflow optimisation. A spotlight is cast on the profound impact of AI on diagnostic processes, personalised medicine, and clinical workflows, with empirical evidence derived from a series of case studies across multiple medical disciplines. However, the integration of AI in radiology is not devoid of challenges. The review ventures into the labyrinth of obstacles inherent to AI-driven radiology: data quality, the 'black box' enigma, infrastructural and technical complexities, and ethical implications. Peering into the future, the review contends that the road ahead for AI in radiology is paved with promising opportunities. It advocates for continuous research, embracing avant-garde imaging technologies, and fostering robust collaborations between radiologists and AI developers. The conclusion underlines the role of AI as a catalyst for change in radiology, a stance that is firmly rooted in sustained innovation, dynamic partnerships, and a steadfast commitment to ethical responsibility.
Related Scientific Articles
Luís Pinto-Coelho
1 December 2023
The integration of artificial intelligence (AI) into medical imaging has ushered in an era of transformation in healthcare. This literature review explores the latest innovations and applications of AI in the field, highlighting its profound impact on medical diagnosis and patient care. The innovation segment explores cutting-edge developments in AI, such as deep learning algorithms, convolutional neural networks, and generative adversarial networks, which have significantly improved the accuracy and efficiency of medical image analysis. These innovations have enabled rapid and accurate detection of abnormalities, from identifying tumors during radiological examinations to detecting early signs of eye disease in retinal images. The article also covers various applications of AI in medical imaging, including radiology, pathology, cardiology, and more. AI-based diagnostic tools not only speed up the interpretation of complex images but also improve early detection of disease, ultimately delivering better outcomes for patients. Additionally, AI-based image processing facilitates personalized treatment plans, thereby optimizing healthcare delivery. This literature review underscores the paradigm shift that AI has brought to medical imaging, highlighting its role in revolutionizing diagnosis and patient care. By combining cutting-edge AI techniques and their practical applications, it is clear that AI will continue shaping the future of healthcare in profound and positive ways.
M. Klontzas Ali S. Tejani John T Mongan + 4 others
29 May 2024
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. ©RSNA, 2024.
A. Piórkowski Michał Strzelecki R. Obuchowicz
1 May 2024
Artificial intelligence (AI) is currently becoming a leading field in data processing [...].
Heather M. Whitney Sanmi Koyejo K. Drukker + 9 others
26 April 2023
Purpose: There is an increasing interest in developing medical imaging-based machine learning methods, also known as medical imaging artificial intelligence (AI), for the detection, diagnosis, prognosis, and risk assessment of disease with the goal of clinical implementation. These tools are intended to help improve traditional human decision-making in medical imaging. However, biases introduced in the steps toward clinical deployment may impede their intended function, potentially exacerbating inequities. Specifically, medical imaging AI can propagate or amplify biases introduced in the many steps from model inception to deployment, resulting in a systematic difference in the treatment of different groups. This work aims to recognize and address various sources of bias essential for algorithmic fairness and trustworthiness, and to contribute to a just and equitable deployment of AI in medical imaging. Approach: Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. We identified sources of bias in AI/ML and mitigation strategies for these biases, and developed recommendations for best practices in medical imaging AI/ML development. Results: Five main steps along the roadmap of medical imaging AI/ML were identified: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, or bias categories, we identified 29 sources of potential bias, many of which can impact multiple steps, as well as mitigation strategies. Conclusions: Our findings provide a valuable resource to researchers, clinicians, and the public at large.
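The "systematic difference in the treatment of different groups" described above can be surfaced during model evaluation with a simple subgroup audit. The sketch below is illustrative only and is not from the cited work; the toy labels, predictions, and group assignments are invented for the example.

```python
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, group):
    """Report per-group accuracy and the largest pairwise gap.

    A large accuracy gap between subgroups is one simple, measurable
    symptom of the biases the roadmap describes.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accs = {g: float(np.mean(y_pred[group == g] == y_true[group == g]))
            for g in np.unique(group)}
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Toy evaluation set: the model is right 90% of the time for group "A"
# but only 60% of the time for group "B".
y_true = np.array([1] * 10 + [1] * 10)
y_pred = np.array([1] * 9 + [0] + [1] * 6 + [0] * 4)
group = np.array(["A"] * 10 + ["B"] * 10)

accs, gap = subgroup_accuracy_gap(y_true, y_pred, group)
print(accs, round(gap, 3))
```

A check like this fits naturally into step (4) of the roadmap, model evaluation, before any deployment decision is made.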
J. Caspers
25 April 2021
During the last decade, data science technologies such as artificial intelligence (AI) and radiomics have emerged strongly in radiologic research. Radiomics refers to the (automated) extraction of a large number of quantitative features from medical images [1]. A typical radiomics workflow involves image acquisition and segmentation as well as feature extraction and prioritization/reduction as preparation for its ultimate goal, which is predictive modeling [2]. This final step is where radiomics and AI typically intertwine to build a gainful symbiosis. In recent years, the field of medical imaging has seen a rising number of publications on radiomics and AI applications with increasingly refined methodologies [3, 4]. The formulation of best-practice white papers and quality criteria for publications on predictive modeling, such as the TRIPOD [5] or CLAIM [6] criteria, has substantially promoted this qualitative gain. Consequently, relevant methodological approaches advancing the generalizability of predictive models are increasingly being observed in recent publications, e.g., the careful composition of representative and unbiased datasets, avoidance of data leakage, the incorporation of (nested) cross-validation approaches for model development, particularly on small datasets, or the use of independent, external test samples. In this regard, the work of Song et al [7] on a clinical-radiomics nomogram for prediction of functional outcome in intracranial hemorrhage, published in the current issue of European Radiology, is just one example of the general trend. However, in contrast to the rising utilization and importance of predictive modeling in medical imaging research, these technologies have not been widely adopted in clinical routine. Besides regulatory, medicolegal, or ethical issues, one of the major hurdles for broad usage of AI and predictive models is the lack of trust in these technologies among medical practitioners, healthcare stakeholders, and patients.
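The predictive-modeling step of the workflow above, with feature reduction kept inside a nested cross-validation loop to avoid data leakage, can be sketched as follows. This is a minimal illustration using synthetic data in place of real extracted radiomics features; the dataset shape and hyperparameter grid are assumptions for the example, not from the cited papers.

```python
# Nested cross-validation for a radiomics-style predictive model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for extracted radiomics features (e.g., shape/texture statistics).
X, y = make_classification(n_samples=120, n_features=50, n_informative=8,
                           random_state=0)

# Feature reduction and the classifier live inside one pipeline, so that
# selection is refit on each training fold only -- this avoids data leakage.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])
param_grid = {"select__k": [5, 10, 20], "clf__C": [0.1, 1.0, 10.0]}

inner = KFold(n_splits=3, shuffle=True, random_state=1)  # model selection
outer = KFold(n_splits=5, shuffle=True, random_state=2)  # performance estimate
search = GridSearchCV(pipe, param_grid, cv=inner)
scores = cross_val_score(search, X, y, cv=outer)
print(f"nested-CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

The outer loop yields a performance estimate untouched by hyperparameter tuning, which is exactly the property that matters on the small datasets the editorial mentions.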
After more than a decade of scientific progress on AI and predictive modeling in medical imaging, we should now take the opportunity to focus our research on the trustworthiness of AI and predictive modeling in order to trailblaze their translation into clinical practice. Several prospects could enhance the trustworthiness of predictive models for clinical use. One of the main factors will be transparency about their reliability in real-world applications. Large multicentric prospective trials will be paramount to assess and validate the performance and especially the generalizability of predictive models in a robust and minimally biased fashion. Additionally, benchmarking of AI tools by independent institutions on external, heterogeneous real-world data would provide transparency on model performance and enhance trust. In general, trust in new technologies is strongly influenced by the comprehensibility of these techniques for their users. In the field of predictive modeling, this topic is often described with the term "explainable AI," which is being increasingly considered in current research [8]. Explainable AI seeks to unravel the "black-box" nature of many predictive models, including artificial neural networks, by making decision processes comprehensible, e.g., by revealing the features that drive their decisions. Trust in predictive models will therefore substantially increase when models are developed transparently and AI systems are made comprehensible. Another issue with current AI tools is that they mainly incorporate narrow AI, i.e., they address only one very specific task. We are currently miles, if not light-years, away from building real strong AI, that is, artificial intelligence with the capacity to learn any intellectual task that a human being can. However, building more comprehensive AI systems that solve multiple predictive tasks might enhance their trustworthiness for users.
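One simple, model-agnostic way of "revealing the features that drive their decisions", as described above, is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below uses synthetic data and a generic classifier as stand-ins for a trained imaging model; none of it comes from the cited editorial.

```python
# Permutation importance as a minimal "explainable AI" example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for any feature-based imaging model.
X, y = make_classification(n_samples=300, n_features=6, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffling an influential feature degrades held-out accuracy; shuffling
# an irrelevant one barely changes it.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:+.3f}")
```

For deep models working directly on images, saliency-map techniques play the analogous role, but the underlying idea of attributing a decision to its inputs is the same.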
For example, a user might be inclined to follow thoughts along the line of "I have good experience in this system predicting the [...]

This comment refers to the article available at https://doi.org/10.1007/s00330-021-07828-7.