DOI: 10.1609/aies.v7i1.31613
Published on 16 October 2024 at the AAAI/ACM Conference on AI, Ethics, and Society

All Too Human? Mapping and Mitigating the Risk from Anthropomorphic AI

Laura Weidinger, Iason Gabriel, Canfer Akbulut + 2 authors

Abstract

The development of highly-capable conversational agents, underwritten by large language models, has the potential to shape user interaction with this technology in profound ways, particularly when the technology is anthropomorphic, or appears human-like. Although the effects of anthropomorphic AI are often benign, anthropomorphic design features also create new kinds of risk. For example, users may form emotional connections to human-like AI, creating the risk of infringing on user privacy and autonomy through over-reliance. To better understand the possible pitfalls of anthropomorphic AI systems, we make two contributions: first, we explore anthropomorphic features that have been embedded in interactive systems in the past, and leverage this precedent to highlight the current implications of anthropomorphic design. Second, we propose research directions for informing the ethical design of anthropomorphic AI. In advancing the responsible development of AI, we promote approaches to the ethical foresight, evaluation, and mitigation of harms arising from user interactions with anthropomorphic AI.

Related Research Articles

Deceptive AI Ecosystems: The Case of ChatGPT

Ştefan Sarkadi, Yifan Xu, Xiao Zhan

18 June 2023

ChatGPT, an AI chatbot, has gained popularity for its capability in generating human-like responses. However, this feature carries several risks, most notably due to its deceptive behaviour such as offering users misleading or fabricated information that could further cause ethical issues. To better understand the impact of ChatGPT on our social, cultural, economic, and political interactions, it is crucial to investigate how ChatGPT operates in the real world where various societal pressures influence its development and deployment. This paper emphasizes the need to study ChatGPT "in the wild", as part of the ecosystem it is embedded in, with a strong focus on user involvement. We examine the ethical challenges stemming from ChatGPT’s deceptive human-like interactions and propose a roadmap for developing more transparent and trustworthy chatbots. Central to our approach is the importance of proactive risk assessment and user participation in shaping the future of chatbot technology.

The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction

J. Landay, Andrea Cuadra, Nicola Dell + 4 others

11 May 2024

From ELIZA to Alexa, Conversational Agents (CAs) have been deliberately designed to elicit or project empathy. Although empathy can help technology better serve human needs, it can also be deceptive and potentially exploitative. In this work, we characterize empathy in interactions with CAs, highlighting the importance of distinguishing evocations of empathy between two humans from ones between a human and a CA. To this end, we systematically prompt CAs backed by large language models (LLMs) to display empathy while conversing with, or about, 65 distinct human identities, and also compare how different LLMs display or model empathy. We find that CAs make value judgments about certain identities, and can be encouraging of identities related to harmful ideologies (e.g., Nazism and xenophobia). Moreover, a computational approach to understanding empathy reveals that despite their ability to display empathy, CAs do poorly when interpreting and exploring a user’s experience, contrasting with their human counterparts.

On the Design of and Interaction with Conversational Agents: An Organizing and Assessing Review of Human-Computer Interaction Research

L. Kolbe, Stephan Diederich, A. Brendel + 1 other

2022

Conversational agents (CAs), described as software with which humans interact through natural language, have increasingly attracted interest in both academia and practice because of improved capabilities driven by advances in artificial intelligence and, specifically, natural language processing. CAs are used in contexts such as people's private lives, education, and healthcare, as well as in organizations to innovate or automate tasks, for example, in marketing, sales, or customer service. In addition to these application contexts, CAs take on different forms in terms of their embodiment, the communication mode, and their (often human-like) design. Despite their popularity, many CAs are unable to fulfill expectations, and fostering a positive user experience is challenging. To better understand how CAs can be designed to fulfill their intended purpose and how humans interact with them, a number of studies focusing on human-computer interaction have been carried out in recent years, which have contributed to our understanding of this technology. However, currently, a structured overview of this research is lacking, thus impeding the systematic identification of research gaps and knowledge on which future studies can build. To address this issue, we conducted an organizing and assessing review of 262 studies, applying a sociotechnical lens to analyze CA research regarding user interaction, context, agent design, as well as CA perceptions and outcomes. This study contributes an overview of the status quo of CA research, identifies four research streams through cluster analysis, and proposes a research agenda comprising six avenues and sixteen directions to move the field forward.

Relationship Development with Humanoid Social Robots: Applying Interpersonal Theories to Human-Robot Interaction

Jesse Fox, Andrew Gambino

11 January 2021

Humanoid social robots (HSRs) are human-made technologies that can take physical or digital form, resemble people in form or behavior to some degree, and are designed to interact with people. A common assumption is that social robots can and should mimic humans, such that human-robot interaction (HRI) closely resembles human-human (i.e., interpersonal) interaction. Research is often framed from the assumption that rules and theories that apply to interpersonal interaction should apply to HRI (e.g., the computers are social actors framework). Here, we challenge these assumptions and consider more deeply the relevance and applicability of our knowledge about personal relationships to relationships with social robots. First, we describe the typical characteristics of HSRs available to consumers currently, elaborating characteristics relevant to understanding social interactions with robots such as form anthropomorphism and behavioral anthropomorphism. We also consider common social affordances of modern HSRs (persistence, personalization, responsiveness, contingency, and conversational control) and how these align with human capacities and expectations. Next, we present predominant interpersonal theories whose primary claims are foundational to our understanding of human relationship development (social exchange theories, including resource theory, interdependence theory, equity theory, and social penetration theory). We consider whether interpersonal theories are viable frameworks for studying HRI and human-robot relationships given their theoretical assumptions and claims. We conclude by providing suggestions for researchers and designers, including alternatives to equating human-robot relationships to human-human relationships.

An Overview of Artificial Intelligence Ethics

Bifei Mao, Zeqi Zhang, X. Yao + 1 other

1 August 2023

Artificial intelligence (AI) has profoundly changed and will continue to change our lives. AI is being applied in more and more fields and scenarios such as autonomous driving, medical care, media, finance, industrial robots, and internet services. The widespread application of AI and its deep integration with the economy and society have improved efficiency and produced benefits. At the same time, it will inevitably impact the existing social order and raise ethical concerns. Ethical issues brought about by AI systems, such as privacy leakage, discrimination, unemployment, and security risks, have caused considerable concern. Therefore, AI ethics, which is a field related to the study of ethical issues in AI, has become not only an important research topic in academia, but also an important topic of common concern for individuals, organizations, countries, and society. This article gives a comprehensive overview of this field by summarizing and analyzing the ethical risks and issues raised by AI, ethical guidelines and principles issued by different organizations, approaches for addressing ethical issues in AI, and methods for evaluating the ethics of AI. Additionally, challenges in implementing ethics in AI and some future perspectives are pointed out. We hope our work will provide a systematic and comprehensive overview of AI ethics for researchers and practitioners in this field, especially newcomers to this research discipline.
