Cybersecurity for AI Systems: A Survey
Abstract
Recent advances in machine learning have created an opportunity to embed artificial intelligence in software-intensive systems. These artificial intelligence systems, however, come with a new set of vulnerabilities, making them potential targets for cyberattacks. This research examines the landscape of these cyberattacks and organizes them into a taxonomy. It further explores potential defense mechanisms to counter such attacks and the use of these mechanisms early in the development life cycle to enhance the safety and security of artificial intelligence systems.
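A taxonomy like the one the survey builds can be encoded as a simple data structure grouping attacks by the life-cycle phase in which they occur. The categories and descriptions below are common in the AI security literature and are illustrative only, not the paper's actual taxonomy.

```python
# Illustrative sketch: attacks on AI systems grouped by life-cycle phase.
# Categories are drawn from the general AI security literature, not from
# the surveyed paper's own taxonomy.
AI_ATTACK_TAXONOMY = {
    "training-time": {
        "data poisoning": "corrupt training data to degrade or bias the model",
        "backdoor insertion": "implant triggers that force chosen outputs",
    },
    "inference-time": {
        "evasion": "perturb inputs so the deployed model misclassifies them",
        "model extraction": "reconstruct the model via repeated queries",
        "membership inference": "decide whether a record was in the training set",
    },
}

def attacks_by_phase(phase):
    """Return the attack names recorded for a given life-cycle phase."""
    return sorted(AI_ATTACK_TAXONOMY.get(phase, {}))

print(attacks_by_phase("training-time"))
```

Organizing attacks by phase mirrors the survey's goal of applying defenses early in the development life cycle: training-time attacks call for defenses during data collection and model building, inference-time attacks for defenses at deployment.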
Related Scholarly Articles
Corinne Fair M. Kuzlu Ozgur Guler
24 February 2021
In recent years, the use of the Internet of Things (IoT) has increased exponentially, and cybersecurity concerns have increased along with it. On the cutting edge of cybersecurity is Artificial Intelligence (AI), which is used for the development of complex algorithms to protect networks and systems, including IoT systems. However, cyber-attackers have figured out how to exploit AI and have even begun to use adversarial AI in order to carry out cybersecurity attacks. This review paper compiles information from several other surveys and research papers regarding IoT, AI, and attacks with and against AI and explores the relationship between these three topics with the purpose of comprehensively presenting and summarizing relevant literature in these fields.
Iqbal H. Sarker
10 January 2023
Due to the rising dependency on digital technology, cybersecurity has emerged as a more prominent field of research and application that typically focuses on securing devices, networks, systems, data and other resources from various cyber-attacks, threats, risks, damages, or unauthorized access. Artificial intelligence (AI), also referred to as a crucial technology of the current Fourth Industrial Revolution (Industry 4.0 or 4IR), could be the key to intelligently dealing with these cyber issues. Various forms of AI methodologies, such as analytical, functional, interactive, textual as well as visual AI can be employed to get the desired cyber solutions according to their computational capabilities. However, the dynamic nature and complexity of real-world situations and data gathered from various cyber sources make it challenging nowadays to build an effective AI-based security model. Moreover, defending robustly against adversarial attacks is still an open question in the area. In this article, we provide a comprehensive view on "Cybersecurity Intelligence and Robustness," emphasizing multi-aspect AI-based modeling and adversarial learning that could lead to addressing diverse issues in various cyber application areas such as detecting malware or intrusions, zero-day attacks, phishing, data breaches, cyberbullying and other cybercrimes. Thus the eventual security modeling process could be automated, intelligent, and robust compared to traditional security systems. We also emphasize and draw attention to the future aspects of cybersecurity intelligence and robustness along with the research directions within the context of our study. Overall, our goal is not only to explore AI-based modeling and pertinent methodologies but also to focus on the resulting model's applicability for securing our digital systems and society.
Luca Ferretti M. Andreolini Mirco Marchetti + 2 others
17 June 2021
The incremental diffusion of machine learning algorithms in supporting cybersecurity is creating novel defensive opportunities but also new types of risks. Multiple studies have shown that machine learning methods are vulnerable to adversarial attacks that create tiny perturbations aimed at decreasing the effectiveness of detecting threats. We observe that existing literature assumes threat models that are inappropriate for realistic cybersecurity scenarios, because they consider opponents with complete knowledge of the cyber detector or that can freely interact with the target systems. By focusing on Network Intrusion Detection Systems based on machine learning, we identify and model the real capabilities and circumstances required by attackers to carry out feasible and successful adversarial attacks. We then apply our model to several adversarial attacks proposed in the literature and highlight the limits and merits that can result in actual adversarial attacks. The contributions of this article can help harden defensive systems by letting cyber defenders address the most critical and real issues, and can benefit researchers by allowing them to devise novel forms of adversarial attacks based on realistic threat models.
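The "tiny perturbations" mentioned above can be made concrete with a minimal sketch of an FGSM-style evasion attack against a toy linear detector. This assumes white-box access to the model weights (one of the strong assumptions the article critiques); the detector, weights, and epsilon are illustrative, not taken from the paper.

```python
# Minimal sketch of an FGSM-style evasion attack on a toy linear detector.
# White-box access to the weights is assumed; all values are illustrative.

def score(w, b, x):
    """Linear decision score: positive means the input is flagged as malicious."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(x, w, y_true, epsilon):
    """Step each feature by epsilon in the direction that increases the loss.

    For score s = w.x + b with label y_true in {-1, +1}, the loss gradient
    w.r.t. x is proportional to -y_true * w, so the attack moves x along
    sign(-y_true * w), bounded per feature by epsilon.
    """
    return [xi + epsilon * sign(-y_true * wi) for xi, wi in zip(x, w)]

w, b = [1.0, -2.0, 0.5], 0.0
x = [0.6, -0.2, 0.3]                      # correctly flagged as malicious
x_adv = fgsm_perturb(x, w, y_true=+1, epsilon=0.5)
print(score(w, b, x), "->", score(w, b, x_adv))  # the decision flips sign
```

In realistic network-intrusion settings, the article argues, the attacker rarely has this gradient access and cannot freely query the detector, which is exactly why such white-box perturbation attacks overstate the practical threat.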
Giovanni Apruzzese Fabio Di Franco Wissam Mallouli + 4 others
20 June 2022
Machine Learning (ML) represents a pivotal technology for current and future information systems, and many domains already leverage the capabilities of ML. However, deployment of ML in cybersecurity is still at an early stage, revealing a significant discrepancy between research and practice. Such a discrepancy has its root cause in the current state of the art, which does not allow us to identify the role of ML in cybersecurity. The full potential of ML will never be unleashed unless its pros and cons are understood by a broad audience. This article is the first attempt to provide a holistic understanding of the role of ML in the entire cybersecurity domain—to any potential reader with an interest in this topic. We highlight the advantages of ML with respect to human-driven detection methods, as well as the additional tasks that can be addressed by ML in cybersecurity. Moreover, we elucidate various intrinsic problems affecting real ML deployments in cybersecurity. Finally, we present how various stakeholders can contribute to future developments of ML in cybersecurity, which is essential for further progress in this field. Our contributions are complemented with two real case studies describing industrial applications of ML as defense against cyber-threats.
Felix O. Olowononi D. Rawat Chunmei Liu
14 February 2021
Cyber Physical Systems (CPS) are characterized by their ability to integrate the physical and information or cyber worlds. Their deployment in critical infrastructure has demonstrated a potential to transform the world. However, harnessing this potential is limited by their critical nature and the far-reaching effects of cyber attacks on humans, infrastructure, and the environment. A major source of cyber concerns in CPS arises from the process of sending information from sensors to actuators over the wireless communication medium, which widens the attack surface. Traditionally, CPS security has been investigated from the perspective of preventing intruders from gaining access to the system using cryptography and other access control techniques. Most research has therefore focused on the detection of attacks in CPS. However, in a world of increasing adversaries, it is becoming more difficult to fully protect CPS from adversarial attacks, hence the need to focus on making CPS resilient. Resilient CPS are designed to withstand disruptions and remain functional despite the operation of adversaries. One of the dominant methodologies explored for building resilient CPS depends on machine learning (ML) algorithms. However, drawing on recent research in adversarial ML, we posit that ML algorithms for securing CPS must themselves be resilient. This article therefore comprehensively surveys the interactions between resilient CPS using ML and resilient ML when applied in CPS. The paper concludes with a number of research trends and promising future research directions. Furthermore, with this article, readers can gain a thorough understanding of recent advances in ML-based security and securing ML for CPS, countermeasures, and research trends in this active research area.