DOI: 10.1109/ACCESS.2023.3262138
Published on 4 May 2022 in IEEE Access

Machine Learning Operations (MLOps): Overview, Definition, and Architecture

Sebastian Hirschl Niklas Kühl Dominik Kreuzberger

Abstract

The final goal of all industrial machine learning (ML) projects is to develop ML products and rapidly bring them into production. However, it is highly challenging to automate and operationalize ML products and thus many ML endeavors fail to deliver on their expectations. The paradigm of Machine Learning Operations (MLOps) addresses this issue. MLOps includes several aspects, such as best practices, sets of concepts, and development culture. However, MLOps is still a vague term and its consequences for researchers and professionals are ambiguous. To address this gap, we conduct mixed-method research, including a literature review, a tool review, and expert interviews. As a result of these investigations, we contribute to the body of knowledge by providing an aggregated overview of the necessary principles, components, and roles, as well as the associated architecture and workflows. Furthermore, we provide a comprehensive definition of MLOps and highlight open challenges in the field. Finally, this work provides guidance for ML researchers and practitioners who want to automate and operate their ML products with a designated set of technologies.
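The abstract above frames MLOps around principles such as automation, workflow orchestration, versioning, and continuous training of ML artifacts. Purely as an illustration, the minimal Python sketch below shows one such automated workflow step: train a model, evaluate it, and register a versioned artifact with its metadata. The library choices (scikit-learn, joblib) and the local model_registry directory are assumptions made here for the example; the paper itself is deliberately tool-agnostic.

```python
"""Minimal sketch of an automated MLOps training step: train, evaluate,
and version a model artifact. Hypothetical example; not from the paper."""
import json
import time
from pathlib import Path

import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

REGISTRY = Path("model_registry")  # stand-in for a real model registry


def run_training_pipeline() -> Path:
    # 1. Data extraction / preparation (toy dataset as a placeholder).
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # 2. Model training.
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    # 3. Automated evaluation; the metric is stored alongside the artifact.
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # 4. Versioned registration so the serving side can trace lineage.
    version = time.strftime("%Y%m%d-%H%M%S")
    target = REGISTRY / version
    target.mkdir(parents=True, exist_ok=True)
    joblib.dump(model, target / "model.joblib")
    (target / "metadata.json").write_text(
        json.dumps({"accuracy": accuracy, "version": version}, indent=2)
    )
    return target


if __name__ == "__main__":
    print(f"Registered model at {run_training_pipeline()}")
```

In a full MLOps setup, a step like this would typically be triggered by a pipeline orchestrator rather than run by hand, and the registry would be a dedicated service rather than a directory.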

Related Scientific Articles

Applying DevOps Practices of Continuous Automation for Machine Learning

Saeed Albarhami Charalampos Apostolopoulos I. Karamitsos

13 July 2020

This paper proposes DevOps practices for machine learning applications, integrating the development and operation environments seamlessly. The machine learning processes of development and deployment may seem easy during the experimentation phase. However, if not carefully designed, deploying and using such models may lead to complex, time-consuming approaches that require significant and costly efforts for maintenance, improvement, and monitoring. This paper presents how to apply continuous integration (CI) and continuous delivery (CD) principles, practices, and tools so as to minimize waste, support rapid feedback loops, explore the hidden technical debt, improve value delivery and maintenance, and improve operational functions for real-world machine learning applications.
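To make the CI/CD idea in this abstract concrete, the hedged sketch below shows a minimal quality gate that a CI stage could run after training: it compares a candidate model's accuracy against a committed baseline and fails the job (non-zero exit) on regression. The file paths, metric name, and tolerance are hypothetical choices for illustration; the paper does not prescribe specific tooling.

```python
"""Illustrative CI quality gate for an ML model: the CI job fails (non-zero
exit) when the candidate model's accuracy regresses past a tolerance.
Paths, metric, and tolerance are assumptions for this example."""
import json
import sys
from pathlib import Path

BASELINE_FILE = Path("metrics/baseline.json")    # committed with the repo
CANDIDATE_FILE = Path("metrics/candidate.json")  # produced by the training job
TOLERANCE = 0.01  # allowed accuracy drop before the gate blocks the release


def main() -> int:
    baseline = json.loads(BASELINE_FILE.read_text())["accuracy"]
    candidate = json.loads(CANDIDATE_FILE.read_text())["accuracy"]
    if candidate + TOLERANCE < baseline:
        print(f"FAIL: candidate accuracy {candidate:.3f} < baseline {baseline:.3f}")
        return 1  # non-zero exit marks the CI stage as failed
    print(f"PASS: candidate accuracy {candidate:.3f} (baseline {baseline:.3f})")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A gate like this is one small example of the rapid feedback loops the paper argues for: model regressions are caught in the pipeline rather than after deployment.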

Data Science Through the Looking Glass

C. Curino Bojan Karlas Markus Weimer + 10 others

29 July 2022

The recent success of machine learning (ML) has led to an explosive growth of systems and applications built by an ever-growing community of system builders and data science (DS) practitioners. This quickly shifting panorama, however, is challenging for system builders and practitioners alike to follow. In this paper, we set out to capture this panorama through a wide-angle lens, performing the largest analysis of DS projects to date, focusing on questions that can advance our understanding of the field and determine investments. Specifically, we download and analyze (a) over 8M notebooks publicly available on GitHub and (b) over 2M enterprise ML pipelines developed within Microsoft. Our analysis includes coarse-grained statistical characterizations, fine-grained analysis of libraries and pipelines, and comparative studies across datasets and time. We report a large number of measurements for our readers to interpret and draw actionable conclusions on (a) what system builders should focus on to better serve practitioners and (b) what technologies practitioners should rely on.
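As a rough illustration of the kind of fine-grained library analysis this abstract refers to, the sketch below counts top-level imports across locally downloaded Jupyter notebooks. It is not the authors' tooling; the notebooks directory and the use of Python's ast and json modules are assumptions made here for the example.

```python
"""Toy sketch of fine-grained library analysis: count top-level imports
across Jupyter notebooks in a local directory. Illustrative only."""
import ast
import json
from collections import Counter
from pathlib import Path


def imported_modules(source: str) -> set[str]:
    """Return top-level module names imported by one code cell."""
    modules: set[str] = set()
    try:
        tree = ast.parse(source)
    except SyntaxError:  # skip cells with magics or invalid syntax
        return modules
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules


def count_imports(notebook_dir: str) -> Counter:
    counts: Counter = Counter()
    for path in Path(notebook_dir).glob("**/*.ipynb"):
        cells = json.loads(path.read_text(encoding="utf-8")).get("cells", [])
        for cell in cells:
            if cell.get("cell_type") == "code":
                counts.update(imported_modules("".join(cell.get("source", []))))
    return counts


if __name__ == "__main__":
    for module, n in count_imports("notebooks").most_common(10):
        print(f"{module}: {n}")
```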

Software-Engineering Design Patterns for Machine Learning Applications

H. Takeuchi H. Washizaki Foutse Khomh + 4 others

1 March 2022

In this study, a multivocal literature review identified 15 software-engineering design patterns for machine learning applications. Findings suggest that there are opportunities to increase the patterns’ adoption in practice by raising awareness of such patterns within the community.

ML DevOps Adoption in Practice: A Mixed-Method Study of Implementation Patterns and Organizational Benefits

R. Dileepkumar S Juby Mathew

8 February 2025

Machine Learning (ML) DevOps, also known as MLOps, has emerged as a critical framework for efficiently operationalizing ML models in various industries. This study investigates the adoption trends, implementation efforts, and benefits of ML DevOps through a combination of literature review and empirical analysis. By surveying 150 professionals across industries and conducting in-depth interviews with 20 practitioners, the study provides insights into the growing adoption of ML DevOps, particularly in sectors like finance and healthcare. The research identifies key challenges, such as fragmented tooling, data management complexities, and skill gaps, which hinder widespread adoption. However, the findings highlight significant benefits, including improved deployment frequency, reduced error rates, enhanced collaboration between data science and DevOps teams, and lower operational costs. Organizations leveraging ML DevOps report accelerated model deployment, increased scalability, and better compliance with industry regulations. The study also explores the technical and cultural efforts required for successful implementation, such as investments in automation tools, real-time monitoring, and upskilling initiatives. The results indicate that while challenges remain, ML DevOps presents a viable path to optimizing ML lifecycle management, ensuring model reliability, and enhancing business value. Future research should focus on standardizing ML DevOps practices, assessing the return on investment across industries, and developing frameworks for seamless integration with traditional DevOps methodologies.

AITIA: Embedded AI Techniques for Embedded Industrial Applications

Kristof Van Beeck J. Lemeire N. Mentens + 10 others

1 August 2020

New achievements in Artificial Intelligence (AI) and Machine Learning (ML) are reported almost daily by the big companies. While those achievements are accomplished by fast and massive data processing techniques, the potential of embedded machine learning, where intelligent algorithms run in resource-constrained devices rather than in the cloud, is still not well understood by the majority of industrial players and Small and Medium Enterprises (SMEs). Nevertheless, the potential of embedded machine learning for processing high-performance algorithms without relying on expensive cloud solutions is perceived as very high. This potential has led to a broad demand by industry and SMEs for a practical and application-oriented feasibility study that helps them understand the potential benefits, but also the limitations, of embedded AI. To address these needs, this paper presents the approach of the AITIA project, a consortium of four universities that aims at developing and demonstrating best practices for embedded AI by means of four industrial case studies of high relevance to European industry and SMEs: sensors, security, automotive, and industry 4.0.
