DOI: 10.1007/s12369-022-00911-z
Published 18 August 2022 in the International Journal of Social Robotics

Politeness in Human–Robot Interaction: A Multi-Experiment Study with Non-Humanoid Robots

Eliran Itzhak, Galit Nimrod, N. Tractinsky + 3 authors

Abstract

We studied politeness in human–robot interaction based on Lakoff’s politeness theory. In a series of eight studies, we manipulated three levels of politeness in non-humanoid robots and evaluated their effects. A table-setting task was developed for two types of robots (a robotic manipulator and a mobile robot). The studies included two populations (old and young adults) and were conducted in two conditions (video and live). Results revealed that polite robot behavior positively affected users' perceptions of the interaction and that participants were able to differentiate between the designed politeness levels. Participants reported higher levels of enjoyment, satisfaction, and trust when they interacted with the politest robot behavior. Fewer young adults than old adults trusted the politest robot behavior. Enjoyment of and trust in the interaction were higher in the live condition than in the video condition, and participants were more satisfied when interacting with the mobile robot than with the manipulator.

Related Scientific Articles

Cultural Differences in Indirect Speech Act Use and Politeness in Human-Robot Interaction

Jong-suk Choi, Sukyung Seok, Eunjin Hwang + 1 more

7 March 2022

How do native English speakers and native Korean speakers politely make a request to a robot? Previous human-robot interaction studies in English have demonstrated that humans frequently use indirect speech acts (ISAs) to make their requests to robots polite. However, it is unknown whether humans use ISAs with robots to a comparable extent in other languages. In addition to ISAs, Korean has other politeness expressions, called honorifics, which convey a different kind of politeness than ISAs. This study aimed to investigate cultural differences in humans' politeness expressions when making requests to robots and to re-examine the effect of the conventionality of context on the use of politeness expressions. We conducted a replication of the experiment of Williams et al. (2018) with native Korean speakers and analyzed their use of ISAs and honorifics. Our results showed that ISAs are rarely used in task-based human-robot interaction in Korean. Instead, honorifics are used more frequently than ISAs and are more common in conventionalized contexts than in unconventionalized contexts. These results suggest that the differences in politeness expressions between English and Korean exist in both human-robot interaction and human-human interaction. Furthermore, the conventionality of context strongly constrains humans to follow social norms in both languages.

A Taxonomy of Social Errors in Human-Robot Interaction

Leimin Tian, S. Oviatt

9 February 2021

Robotic applications have entered various aspects of our lives, such as health care and educational services. In such human-robot interaction (HRI), trust and mutual adaptation are established and maintained through a positive social relationship between a user and a robot. This social relationship relies on the perceived competence of a robot along the social-emotional dimension. However, because of technical limitations and user heterogeneity, current HRI is far from error-free, especially when a system leaves controlled lab environments and is applied in-the-wild. Errors in HRI may either degrade a user’s perception of a robot’s capability to achieve a task (defined as performance errors in this work) or degrade a user’s perception of a robot’s socio-affective competence (defined as social errors in this work). The impact of these errors, and effective strategies to handle that impact, remain an open question. We focus on social errors in HRI in this work. In particular, we identify the major attributes of perceived socio-affective competence by reviewing human social interaction studies and HRI error studies. This motivates us to propose a taxonomy of social errors in HRI. We then discuss the impact of social errors situated in three representative HRI scenarios. This article provides foundations for a systematic analysis of the social-emotional dimension of HRI. The proposed taxonomy of social errors encourages the development of user-centered HRI systems, designed to offer positive and adaptive interaction experiences and improved interaction outcomes.

Relationship Development with Humanoid Social Robots: Applying Interpersonal Theories to Human-Robot Interaction

Jesse Fox, Andrew Gambino

11 January 2021

Humanoid social robots (HSRs) are human-made technologies that can take physical or digital form, resemble people in form or behavior to some degree, and are designed to interact with people. A common assumption is that social robots can and should mimic humans, such that human-robot interaction (HRI) closely resembles human-human (i.e., interpersonal) interaction. Research is often framed from the assumption that rules and theories that apply to interpersonal interaction should apply to HRI (e.g., the computers are social actors framework). Here, we challenge these assumptions and consider more deeply the relevance and applicability of our knowledge about personal relationships to relationships with social robots. First, we describe the typical characteristics of HSRs available to consumers currently, elaborating characteristics relevant to understanding social interactions with robots such as form anthropomorphism and behavioral anthropomorphism. We also consider common social affordances of modern HSRs (persistence, personalization, responsiveness, contingency, and conversational control) and how these align with human capacities and expectations. Next, we present predominant interpersonal theories whose primary claims are foundational to our understanding of human relationship development (social exchange theories, including resource theory, interdependence theory, equity theory, and social penetration theory). We consider whether interpersonal theories are viable frameworks for studying HRI and human-robot relationships given their theoretical assumptions and claims. We conclude by providing suggestions for researchers and designers, including alternatives to equating human-robot relationships to human-human relationships.

Human-robot interaction: the impact of robotic aesthetics on anticipated human trust

P. Newbury, Joel Pinney, F. Carroll

14 January 2022

Background: Human senses have evolved to recognise sensory cues. Beyond our perception, they play an integral role in our emotional processing, learning, and interpretation. They help us sculpt our everyday experiences and can be triggered by aesthetics to form the foundations of our interactions with each other and our surroundings. In terms of Human-Robot Interaction (HRI), robots, given their senses, can interact with both people and environments. They can exhibit attributes of human characteristics, which in turn can make the interchange with technology a more appealing and acceptable experience. However, for many reasons, people still do not seem to trust and accept robots. Trust is expressed as a person’s ability to accept the potential risks associated with participating alongside an entity such as a robot. Whilst trust is an important factor in building relationships with robots, the presence of uncertainties adds an additional dimension to the decision to trust a robot. In order to begin to understand how to build trust with robots and reverse the negative ideology, this paper examines the influence of aesthetic design techniques on the human ability to trust robots.

Method: This paper explores the potential of robots' unique opportunities to improve their facilities for empathy, emotion, and social awareness beyond their more cognitive functionalities. Through an online questionnaire distributed globally, we explored participants' ability and willingness to trust the Canbot U03 robot. Participants were presented with a range of visual questions that manipulated the robot’s facial screen and were asked whether or not they would trust the robot. A selection of questions put participants in situations where they had to establish whether or not to trust a robot’s responses based solely on its visual appearance. We accomplished this by manipulating different design elements of the robot's facial and chest screens, which influenced the human-robot interaction.

Results: We found that certain facial aesthetics seem to be more trustworthy than others, such as a cartoon face versus a human face, and that certain visual variables (i.e., blur) afforded uncertainty more than others. Consequently, this paper reports that participants' uncertainties about the visualisations greatly influenced their willingness to accept and trust the robot. The results of introducing certain anthropomorphic characteristics underscored participants' alignment with the uncanny valley theory, where increasing the degree of human likeness drew a thin line between participants accepting robots and not. By understanding which manipulations of design elements created the aesthetic effect that triggered the affective processes, this paper further enriches our knowledge of how we might design for certain emotions and feelings, and ultimately for more socially acceptable and trust-inspiring robotic experiences.

Does Self-Disclosing to a Robot Induce Liking for the Robot? Testing the Disclosure and Liking Hypotheses in Human–Robot Interaction

Yuheng Wu, Shuyi Pan, Lin Zhang + 2 more

8 January 2023

When someone intimately discloses themselves to a robot, does that make them like the robot more? Does a robot’s reciprocal disclosure contribute to a human’s liking of the robot? To explore whether these disclosure-liking effects in human–human interaction also apply to human–robot interaction, we conducted a between-subjects lab experiment to examine how self-disclosure intimacy (intimate vs. non-intimate) and reciprocal self-disclosure (yes vs. no) from the robot influence participants’ social perceptions (i.e., likability, trustworthiness, and social attraction) of the robot. None of the disclosure-liking effects were confirmed by the results. In contrast, reciprocal self-disclosure from the robot increased liking under intimate self-disclosure but decreased liking under non-intimate self-disclosure, indicating a crossover interaction effect on likability. A post-hoc analysis was conducted to further understand these patterns. Implications for the computers-are-social-actors (CASA) paradigm are discussed.
