Does Self-Disclosing to a Robot Induce Liking for the Robot? Testing the Disclosure and Liking Hypotheses in Human–Robot Interaction
Abstract
When someone intimately discloses themselves to a robot, does that make them like the robot more? Does a robot’s reciprocal disclosure contribute to a human’s liking of the robot? To explore whether these disclosure-liking effects in human–human interaction also apply to human–robot interaction, we conducted a between-subjects lab experiment to examine how self-disclosure intimacy (intimate vs. non-intimate) and reciprocal self-disclosure (yes vs. no) from the robot influence participants’ social perceptions (i.e., likability, trustworthiness, and social attraction) toward the robot. None of the disclosure-liking effects were confirmed by the results. Instead, reciprocal self-disclosure from the robot increased liking in the intimate self-disclosure condition but decreased liking in the non-intimate condition, indicating a crossover interaction effect on likability. A post-hoc analysis was conducted to further understand these patterns. Implications for the computers are social actors (CASA) paradigm are discussed.
Related Articles
Jesse Fox Andrew Gambino
11 January 2021
Humanoid social robots (HSRs) are human-made technologies that can take physical or digital form, resemble people in form or behavior to some degree, and are designed to interact with people. A common assumption is that social robots can and should mimic humans, such that human-robot interaction (HRI) closely resembles human-human (i.e., interpersonal) interaction. Research is often framed from the assumption that rules and theories that apply to interpersonal interaction should apply to HRI (e.g., the computers are social actors framework). Here, we challenge these assumptions and consider more deeply the relevance and applicability of our knowledge about personal relationships to relationships with social robots. First, we describe the typical characteristics of HSRs available to consumers currently, elaborating characteristics relevant to understanding social interactions with robots such as form anthropomorphism and behavioral anthropomorphism. We also consider common social affordances of modern HSRs (persistence, personalization, responsiveness, contingency, and conversational control) and how these align with human capacities and expectations. Next, we present predominant interpersonal theories whose primary claims are foundational to our understanding of human relationship development (social exchange theories, including resource theory, interdependence theory, equity theory, and social penetration theory). We consider whether interpersonal theories are viable frameworks for studying HRI and human-robot relationships given their theoretical assumptions and claims. We conclude by providing suggestions for researchers and designers, including alternatives to equating human-robot relationships to human-human relationships.
Eliran Itzhak Galit Nimrod N. Tractinsky + 3 others
18 August 2022
We studied politeness in human–robot interaction based on Lakoff’s politeness theory. In a series of eight studies, we manipulated three different levels of politeness of non-humanoid robots and evaluated their effects. A table-setting task was developed for two different types of robots (a robotic manipulator and a mobile robot). The studies included two different populations (old and young adults) and were conducted in two conditions (video and live). Results revealed that polite robot behavior positively affected users' perceptions of the interaction with the robots and that participants were able to differentiate between the designed politeness levels. Participants reported higher levels of enjoyment, satisfaction, and trust when they interacted with the politest behavior of the robot. A smaller number of young adults trusted the politest behavior of the robot compared to old adults. Enjoyment and trust of the interaction with the robot were higher when study participants were subjected to the live condition compared to video and participants were more satisfied when they interacted with a mobile robot compared to a manipulator.
P. Newbury Joel Pinney F. Carroll
14 January 2022
Background Human senses have evolved to recognise sensory cues. Beyond our perception, they play an integral role in our emotional processing, learning, and interpretation. They are what help us to sculpt our everyday experiences and can be triggered by aesthetics to form the foundations of our interactions with each other and our surroundings. In terms of Human-Robot Interaction (HRI), robots have the possibility to interact with both people and environments given their senses. They can offer the attributes of human characteristics, which in turn can make the interchange with technology a more appealing and admissible experience. However, for many reasons, people still do not seem to trust and accept robots. Trust is expressed as a person’s ability to accept the potential risks associated with participating alongside an entity such as a robot. Whilst trust is an important factor in building relationships with robots, the presence of uncertainties can add an additional dimension to the decision to trust a robot. In order to begin to understand how to build trust with robots and reverse the negative ideology, this paper examines the influences of aesthetic design techniques on the human ability to trust robots. Method This paper explores the potential that robots have unique opportunities to improve their facilities for empathy, emotion, and social awareness beyond their more cognitive functionalities. Through conducting an online questionnaire distributed globally, we explored participants’ ability and acceptance in trusting the Canbot U03 robot. Participants were presented with a range of visual questions which manipulated the robot’s facial screen and asked whether or not they would trust the robot. A selection of questions aimed at putting participants in situations where they were required to establish whether or not to trust a robot’s responses based solely on its visual appearance.
We accomplished this by manipulating different design elements of the robot’s facial and chest screens, which influenced the human-robot interaction. Results We found that certain facial aesthetics seem to be more trustworthy than others, such as a cartoon face versus a human face, and that certain visual variables (i.e., blur) afforded uncertainty more than others. Consequently, this paper reports that participants’ uncertainties about the visualisations greatly influenced their willingness to accept and trust the robot. The results of introducing certain anthropomorphic characteristics emphasised the participants’ embrace of the uncanny valley theory, where pushing the degree of human likeness introduced a thin line between participants accepting robots and not. By understanding which manipulation of design elements created the aesthetic effect that triggered the affective processes, this paper further enriches our knowledge of how we might design for certain emotions, feelings, and ultimately more socially acceptable and trusting robotic experiences.
Oren Zuckerman Elior Carsenti H. Erel
7 March 2022
We evaluate whether an interaction with robots can influence a subsequent Human-Human Interaction without the robots' presence. Social psychology studies indicate that some social experiences have a carryover effect, leading to implicit influences on later interactions. We tested whether a social experience formed in a Human-Robot Interaction can have a carryover effect that impacts a subsequent Human-Human Interaction. We focused on ostracism, a phenomenon known to involve carryover effects that lead to prosocial behavior. Using the Robotic Ostracism Paradigm, we compared two HRI experiences, Exclusion and Inclusion, testing their impact on a Human-Human Interaction that did not involve robots. Robotic ostracism had a carryover effect that led to prosocial behavior in the Human-Human Interaction, whereby participants preferred intimate interpersonal space and displayed increased compliance. We conclude that HRI experiences may involve carryover effects that extend beyond the interaction with robots, impacting separate and different subsequent strictly human interactions.
Sung Park Mincheol Whang
1 February 2022
For a service robot to serve travelers at an airport or for a social robot to live with a human partner at home, it is vital for robots to possess the ability to empathize with human partners and express congruent emotions accordingly. We conducted a systematic review of the literature regarding empathy in interpersonal, virtual agents, and social robots research with inclusion criteria to analyze empirical studies in a peer-reviewed journal, conference proceeding, or a thesis. Based on the review, we define empathy for human–robot interaction (HRI) as the robot’s (observer) capability and process to recognize the human’s (target) emotional state, thoughts, and situation, and produce affective or cognitive responses to elicit a positive perception of humans. We reviewed all prominent empathy theories and established a conceptual framework that illuminates critical components to consider when designing an empathic robot, including the empathy process, outcome, and the observer and target characteristics. This model is complemented by empirical research involving empathic virtual agents and social robots. We suggest critical factors such as domain dependency, multi-modality, and empathy modulation to consider when designing, engineering, and researching empathic social robots.