An Overview of Artificial Intelligence Ethics
Abstract
Artificial intelligence (AI) has profoundly changed, and will continue to change, our lives. AI is being applied in an ever-growing range of fields and scenarios, such as autonomous driving, medical care, media, finance, industrial robotics, and internet services. The widespread application of AI and its deep integration with the economy and society have improved efficiency and produced benefits; at the same time, they inevitably disrupt the existing social order and raise ethical concerns. Ethical issues brought about by AI systems, such as privacy leakage, discrimination, unemployment, and security risks, have caused considerable trouble. AI ethics, the field devoted to studying such issues, has therefore become not only an important research topic in academia but also a matter of common concern for individuals, organizations, countries, and society. This article gives a comprehensive overview of the field by summarizing and analyzing the ethical risks and issues raised by AI, the ethical guidelines and principles issued by different organizations, approaches for addressing ethical issues in AI, and methods for evaluating the ethics of AI. It also points out challenges in implementing ethics in AI and some future perspectives. We hope this work provides a systematic and comprehensive overview of AI ethics for researchers and practitioners in the field, especially beginners in this research discipline.
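As a concrete illustration of what "methods for evaluating the ethics of AI" can look like in practice, the minimal sketch below audits a classifier's binary decisions against one well-known fairness criterion, demographic parity. The function name, data, and groups are invented for this illustration; real audits use richer criteria (equalized odds, calibration) and established toolkits.

```python
# A minimal, hypothetical sketch of one quantitative ethics check:
# auditing binary predictions for demographic parity.
# All names and data below are invented for illustration.

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-decision rates across groups."""
    rates = {}
    for g in set(group):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates[g] = sum(preds) / len(preds)  # share of positive decisions
    return max(rates.values()) - min(rates.values())

# Toy audit: decisions for applicants from two demographic groups.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]                  # model's binary decisions
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]  # protected attribute
print(demographic_parity_gap(y_pred, group))       # 0.75 - 0.25 = 0.50
```

A gap near zero suggests the groups receive positive decisions at similar rates; a large gap is a signal for further investigation, not by itself proof of discrimination.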
Related Scientific Articles
Ádám Nagy, Hannah Hilligoss, Nele Achten + 2 others
15 January 2020
The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these "AI principles," there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends. To that end, this white paper and its associated data visualization compare the contents of thirty-six prominent AI principles documents side-by-side. This effort uncovered a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Underlying this “normative core,” our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus.
Ruth Müller, Sami Haddadin, A. Buyx + 3 others
26 January 2022
The emergence of ethical concerns surrounding artificial intelligence (AI) has led to an explosion of high-level ethical principles being published by a wide range of public and private organizations. However, there is a need to consider how AI developers can be practically assisted to anticipate, identify and address ethical issues regarding AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this paper, we propose that an ‘embedded ethics’ approach, in which ethicists and developers together address ethical issues via an iterative and continuous process from the outset of development, could be an effective means of integrating robust ethical considerations into the practical development of medical AI.
Matthias Braun, Hannah Bleher
26 May 2023
Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory–practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches to AI ethics translate ethics into practice. To that end, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each of these three approaches by asking how they understand and conceptualize theory and practice. We outline the conceptual strengths of each as well as its shortcomings: the embedded ethics approach is context-oriented but risks being biased by its context; ethically aligned approaches are principles-oriented but lack justification theories to deal with trade-offs between competing principles; and the interdisciplinary Value Sensitive Design approach is based on stakeholder values but needs linkage to political, legal, or social governance aspects. Against this background, we develop a three-dimensional meta-framework for applied AI ethics conceptions. Based on critical theory, we suggest these dimensions as starting points for critically reflecting on the conceptualization of theory and practice. We claim, first, that including the dimension of affects and emotions in the ethical decision-making process stimulates reflection on vulnerabilities, experiences of disregard, and marginalization already within the AI development process. Second, we derive from our analysis that considering the dimension of justifying normative background theories provides standards and criteria as well as guidance for prioritizing or evaluating competing principles in cases of conflict. Third, we argue that reflecting on the governance dimension in ethical decision-making is important both for revealing power structures and for realizing ethical AI in application, because this dimension seeks to combine social, legal, technical, and political concerns. This meta-framework can thus serve as a reflective tool for understanding, mapping, and assessing the theory–practice conceptualizations within AI ethics approaches in order to address and overcome their blind spots.
Belle Dang, Yvonne Hong, Andy Nguyen + 2 others
13 October 2022
The advancement of artificial intelligence in education (AIED) has the potential to transform the educational landscape and influence the role of all involved stakeholders. In recent years, applications of AIED have been gradually adopted to advance our understanding of students' learning and to enhance learning performance and experience. However, the adoption of AIED has led to increasing ethical risks and concerns regarding several aspects, such as personal data and learner autonomy. Despite recent announcements of guidelines for ethical and trustworthy AIED, debate continues over the key principles that should underpin ethical AIED. This paper explores whether there is a global consensus on ethical AIED by mapping and analyzing international organizations' current policies and guidelines. We first introduce the opportunities offered by AI in education and the potential ethical issues. We then conduct a thematic analysis to conceptualize and establish a set of ethical principles by examining and synthesizing relevant ethical policies and guidelines for AIED. We discuss each principle and its implications for relevant educational stakeholders, including students, teachers, technology developers, policymakers, and institutional decision-makers. The proposed set of ethical principles is expected to serve as a framework to inform and guide educational stakeholders in the development and deployment of ethical and trustworthy AIED, as well as to catalyze future impact studies in the field.
Kush R. Varshney, Huan Liu, Lu Cheng
1 January 2021
In the current era, people and society have grown increasingly reliant on artificial intelligence (AI) technologies. AI has the potential to drive us towards a future in which all of humanity flourishes. It also comes with substantial risks for oppression and calamity. Discussions about whether we should (re)trust AI have repeatedly emerged in recent years and in many quarters, including industry, academia, healthcare, services, and so on. Technologists and AI researchers have a responsibility to develop trustworthy AI systems. They have responded with great effort to design more responsible AI algorithms. However, existing technical solutions are narrow in scope and have been primarily directed towards algorithms for scoring or classification tasks, with an emphasis on fairness and unwanted bias. To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness and connect major aspects of AI that potentially cause AI’s indifferent behavior. In this survey, we provide a systematic framework of Socially Responsible AI Algorithms that aims to examine the subjects of AI indifference and the need for socially responsible AI algorithms, define the objectives, and introduce the means by which we may achieve these objectives. We further discuss how to leverage this framework to improve societal well-being through protection, information, and prevention/mitigation. This article appears in the special track on AI & Society.
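To make the paper's focus on "fairness and unwanted bias" in scoring and classification tasks concrete, here is a hedged sketch of one simple post-processing mitigation: choosing a separate decision threshold per group so that each group receives positive decisions at roughly the same target rate. Function names and numbers are hypothetical, not taken from the paper.

```python
# Hypothetical post-processing sketch: per-group thresholds that equalize
# positive-decision rates on a scored classification task.
# All names and numbers are invented for illustration.

def per_group_thresholds(scores, group, target_rate):
    """Threshold each group at its own quantile so that roughly
    target_rate of that group receives a positive decision."""
    thresholds = {}
    for g in set(group):
        s = sorted(sc for sc, gr in zip(scores, group) if gr == g)
        k = min(max(int((1 - target_rate) * len(s)), 0), len(s) - 1)
        thresholds[g] = s[k]
    return thresholds

scores = [0.9, 0.7, 0.6, 0.4, 0.8, 0.5, 0.3, 0.2]   # model scores
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]   # protected attribute
th = per_group_thresholds(scores, group, target_rate=0.5)
decisions = [int(s >= th[g]) for s, g in zip(scores, group)]
print(th)         # e.g. {'A': 0.7, 'B': 0.5}
print(decisions)  # [1, 1, 0, 0, 1, 1, 0, 0]: both groups at the 0.5 rate
```

This is exactly the kind of narrow, classification-centric intervention the authors argue is insufficient on its own: it equalizes a single statistic while leaving the broader goals of protection, information, and prevention/mitigation untouched.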