DOI: 10.1145/3613904.3642336
Published May 11, 2024, at the International Conference on Human Factors in Computing Systems

The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction

J. Landay, Andrea Cuadra, Nicola Dell + 4 authors

Abstract

From ELIZA to Alexa, Conversational Agents (CAs) have been deliberately designed to elicit or project empathy. Although empathy can help technology better serve human needs, it can also be deceptive and potentially exploitative. In this work, we characterize empathy in interactions with CAs, highlighting the importance of distinguishing evocations of empathy between two humans from ones between a human and a CA. To this end, we systematically prompt CAs backed by large language models (LLMs) to display empathy while conversing with, or about, 65 distinct human identities, and also compare how different LLMs display or model empathy. We find that CAs make value judgments about certain identities, and can be encouraging of identities related to harmful ideologies (e.g., Nazism and xenophobia). Moreover, a computational approach to understanding empathy reveals that despite their ability to display empathy, CAs do poorly when interpreting and exploring a user’s experience, contrasting with their human counterparts.
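
To make the prompting protocol concrete, below is a minimal sketch of how one might systematically elicit empathy displays from an LLM-backed CA across a list of identities, assuming the OpenAI chat completions API. The identity list, system prompt, and model name are hypothetical illustrations, not the authors' actual materials; in the study's framing, the collected responses would then be analyzed for how well they interpret and explore the user's experience.

```python
# Minimal sketch: systematically prompt an LLM-backed CA to display empathy
# toward users disclosing different identities (hypothetical protocol).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-ins for the study's 65 distinct human identities.
identities = ["a refugee", "a war veteran", "a new parent"]

def elicit_empathy(identity: str, model: str = "gpt-4") -> str:
    """Ask the model to respond empathetically to a user disclosing an identity."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Respond with empathy."},
            {"role": "user", "content": f"I am {identity}."},
        ],
    )
    return response.choices[0].message.content

# Collect one displayed-empathy sample per identity for later analysis.
displays = {identity: elicit_empathy(identity) for identity in identities}
```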

Related Scholarly Articles

All Too Human? Mapping and Mitigating the Risk from Anthropomorphic AI

Laura Weidinger, Iason Gabriel, Canfer Akbulut + 2 more

October 16, 2024

The development of highly-capable conversational agents, underwritten by large language models, has the potential to shape user interaction with this technology in profound ways, particularly when the technology is anthropomorphic, or appears human-like. Although the effects of anthropomorphic AI are often benign, anthropomorphic design features also create new kinds of risk. For example, users may form emotional connections to human-like AI, creating the risk of infringing on user privacy and autonomy through over-reliance. To better understand the possible pitfalls of anthropomorphic AI systems, we make two contributions: first, we explore anthropomorphic features that have been embedded in interactive systems in the past, and leverage this precedent to highlight the current implications of anthropomorphic design. Second, we propose research directions for informing the ethical design of anthropomorphic AI. In advancing the responsible development of AI, we promote approaches to the ethical foresight, evaluation, and mitigation of harms arising from user interactions with anthropomorphic AI.

Empathy in Human–Robot Interaction: Designing for Social Robots

Sung Park, Mincheol Whang

February 1, 2022

For a service robot to serve travelers at an airport or for a social robot to live with a human partner at home, it is vital for robots to possess the ability to empathize with human partners and express congruent emotions accordingly. We conducted a systematic review of the literature on empathy in interpersonal, virtual-agent, and social-robot research, with inclusion criteria limited to empirical studies published in a peer-reviewed journal, a conference proceeding, or a thesis. Based on the review, we define empathy for human–robot interaction (HRI) as the robot's (observer's) capability and process to recognize the human's (target's) emotional state, thoughts, and situation, and to produce affective or cognitive responses that elicit a positive perception from the human. We reviewed all prominent empathy theories and established a conceptual framework that illuminates the critical components to consider when designing an empathic robot, including the empathy process, its outcome, and the observer and target characteristics; a sketch of this framework follows below. This model is complemented by empirical research involving empathic virtual agents and social robots. We suggest critical factors such as domain dependency, multi-modality, and empathy modulation to consider when designing, engineering, and researching empathic social robots.
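
As a reading aid, the sketch below translates the framework's named components (observer and target characteristics, the empathy process, and its outcome) into a small data structure. The field names, the two response types, and the example values are illustrative paraphrases of the abstract, not the authors' formal model.

```python
# Minimal sketch of the HRI empathy framework's components as a data structure.
from dataclasses import dataclass
from enum import Enum

class ResponseType(Enum):
    AFFECTIVE = "affective"   # feeling with the target
    COGNITIVE = "cognitive"   # understanding the target's perspective

@dataclass
class EmpathyEpisode:
    observer: str               # robot (observer) characteristics
    target: str                 # human (target) characteristics
    recognized_state: str       # emotional state/thoughts/situation the robot infers
    response_type: ResponseType # affective or cognitive response
    outcome: str                # e.g., positive perception of the robot

episode = EmpathyEpisode(
    observer="airport service robot with a facial display",
    target="traveler showing visible frustration",
    recognized_state="frustration over a delayed flight",
    response_type=ResponseType.AFFECTIVE,
    outcome="traveler perceives the robot as caring",
)
```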

AI can help people feel heard, but an AI label diminishes this impact

Cheryl J. Wakslak, Yidan Yin, Nan Jia

March 29, 2024

Significance: As AI becomes more embedded in daily life, understanding its potential and limitations in meeting human psychological needs becomes more pertinent. Our research explores the fundamental human desire to “feel heard.” It reveals that while AI can generate responses that make people feel heard, individuals feel more heard when they believe a response comes from a fellow human. These findings highlight the potential of AI to augment human capacity for understanding and communication, while also raising important conceptual questions about the meaning of being heard, as well as practical questions about how best to leverage AI’s capabilities to support greater human flourishing.

ChatGPT: perspectives from human–computer interaction and psychology

Jiaxi Liu

June 18, 2024

The release of GPT-4 has garnered widespread attention across various fields, signaling the impending widespread adoption and application of Large Language Models (LLMs). However, previous research has predominantly focused on the technical principles of ChatGPT and its social impact, overlooking its effects on human–computer interaction and user psychology. This paper explores the multifaceted impacts of ChatGPT on human–computer interaction, psychology, and society through a literature review. The author investigates ChatGPT’s technical foundation, including its Transformer architecture and RLHF (Reinforcement Learning from Human Feedback) process, enabling it to generate human-like responses. In terms of human–computer interaction, the author studies the significant improvements GPT models bring to conversational interfaces. The analysis extends to psychological impacts, weighing the potential of ChatGPT to mimic human empathy and support learning against the risks of reduced interpersonal connections. In the commercial and social domains, the paper discusses the applications of ChatGPT in customer service and social services, highlighting the improvements in efficiency and challenges such as privacy issues. Finally, the author offers predictions and recommendations for ChatGPT’s future development directions and its impact on social relationships.
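
For readers unfamiliar with the RLHF step the paper summarizes, the sketch below shows the pairwise preference loss commonly used to train the reward model in such pipelines (InstructGPT-style training). The PyTorch formulation and the toy scores are illustrative assumptions, not drawn from the paper.

```python
# Minimal sketch of the Bradley-Terry style reward-model loss used in RLHF:
# push the reward of the human-preferred response above the rejected one.
import torch
import torch.nn.functional as F

def reward_model_loss(score_chosen: torch.Tensor,
                      score_rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected), averaged over response pairs."""
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Toy example: reward scores for 3 (chosen, rejected) response pairs.
chosen = torch.tensor([1.2, 0.4, 0.9])
rejected = torch.tensor([0.3, 0.5, -0.1])
print(reward_model_loss(chosen, rejected))  # smaller when chosen > rejected
```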

Deceptive AI Ecosystems: The Case of ChatGPT

Ştefan Sarkadi, Yifan Xu, Xiao Zhan

June 18, 2023

ChatGPT, an AI chatbot, has gained popularity for its capability to generate human-like responses. However, this capability carries several risks, most notably deceptive behaviour such as offering users misleading or fabricated information, which can in turn raise ethical issues. To better understand the impact of ChatGPT on our social, cultural, economic, and political interactions, it is crucial to investigate how ChatGPT operates in the real world, where various societal pressures influence its development and deployment. This paper emphasizes the need to study ChatGPT "in the wild", as part of the ecosystem it is embedded in, with a strong focus on user involvement. We examine the ethical challenges stemming from ChatGPT’s deceptive human-like interactions and propose a roadmap for developing more transparent and trustworthy chatbots. Central to our approach is the importance of proactive risk assessment and user participation in shaping the future of chatbot technology.

