DOI: 10.1080/10447318.2020.1741118
Published 10 February 2020 in the International Journal of Human–Computer Interaction

Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy

B. Shneiderman

Abstract

Well-designed technologies that offer high levels of human control and high levels of computer automation can increase human performance, leading to wider adoption. The Human-Centered Artificial Intelligence (HCAI) framework clarifies how to (1) design for high levels of human control and high levels of computer automation so as to increase human performance, (2) understand the situations in which full human control or full computer control are necessary, and (3) avoid the dangers of excessive human control or excessive computer control. The methods of HCAI are more likely to produce designs that are Reliable, Safe & Trustworthy (RST). Achieving these goals will dramatically increase human performance, while supporting human self-efficacy, mastery, creativity, and responsibility.

Related Articles

Six Human-Centered Artificial Intelligence Grand Challenges

Joe Kider, Sean Koon, M. López-González + 23 others

2 January 2023

Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical, fair, and enhance the human condition. These grand challenges are the result of an international collaboration across academia, industry, and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI). In essence, these challenges advocate for a human-centered approach to AI that (1) is centered in human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans' cognitive capacities. We hope that these challenges and their associated research directions serve as a call for action to conduct research and development in AI that serves as a force multiplier towards more fair, equitable, and sustainable societies.

Human-Centered Trustworthy Framework: A Human–Computer Interaction Perspective

J. Cravino, D. Lamas, Paulo Martins + 1 other

5 May 2023

The proposed framework (Human-Centered Trustworthy Framework) provides a novel human–computer interaction approach to incorporate positive and meaningful trustful user experiences in the system design process. It helps to illustrate potential users' trust concerns in artificial intelligence and guides nonexperts to avoid designing vulnerable interactions that lead to breaches of trust.

Revising human-systems engineering principles for embedded AI applications

Jason Scott Metcalfe, Laura Freeman, M. Cummings

26 January 2023

The recent shift from predominantly hardware-based systems in complex settings to systems that heavily leverage non-deterministic artificial intelligence (AI) reasoning means that typical systems engineering processes must also adapt, especially when humans are direct or indirect users. Systems with embedded AI rely on probabilistic reasoning, which can fail in unexpected ways, and any overestimation of AI capabilities can result in systems with latent functionality gaps. This is especially true when humans oversee such systems, and such oversight has the potential to be deadly, but there is little-to-no consensus on how such systems should be tested to ensure they can gracefully fail. To this end, this work outlines a roadmap for emerging research areas for complex human-centric systems with embedded AI. Fourteen new functional and task requirement considerations are proposed that highlight the interconnectedness between uncertainty and AI, as well as the role humans might need to play in the supervision and secure operation of such systems. In addition, 11 new and modified non-functional requirements, i.e., "ilities," are provided, and two new "ilities," auditability and passive vulnerability, are also introduced. Ten problem areas with AI test, evaluation, verification, and validation are noted, along with the need to determine reasonable risk estimates and acceptable thresholds for system performance. Lastly, multidisciplinary teams are needed for the design of effective and safe systems with embedded AI, and a new AI maintenance workforce should be developed for quality assurance of both underlying data and models.

A "User Experience 3.0 (UX 3.0)" Paradigm Framework: User Experience Design for Human-Centered AI Systems

Wei Xu

3 March 2024

The human-centered artificial intelligence (HCAI) design approach, the user-centered design (UCD) version in the intelligence era, has been promoted to address potential negative issues caused by AI technology; user experience design (UXD) is specifically called out to facilitate the design and development of human-centered AI systems. Over the last three decades, user experience (UX) practice can be divided into three stages in terms of technology platform, user needs, design philosophy, ecosystem, scope, focus, and methodology of UX practice. UX practice is moving towards the intelligence era. Still, the existing UX paradigm mainly aims at non-intelligent systems and lacks a systematic approach to address UX for designing and developing human-centered AI products and systems. The intelligence era has put forward new demands on the UX paradigm. This paper proposes a "UX 3.0" paradigm framework and the corresponding UX methodology for UX practice in the intelligence era. The "UX 3.0" paradigm framework includes four categories of emerging experiences in the intelligence era: ecosystem-based experience, innovation-enabled experience, AI-enabled experience, and human-AI interaction-based experience, each compelling us to enhance current UX practice in terms of design philosophy, scope, focus, and methodology. We believe that the "UX 3.0" paradigm helps enhance existing UX practice and provides methodological support for the research and applications of UX in developing human-centered AI systems. Finally, this paper looks forward to future work implementing the "UX 3.0" paradigm.

Quo vadis artificial intelligence?

Hao Luo, Xiang Li, Yuchen Jiang + 2 others

7 March 2022

The study of artificial intelligence (AI) has been a continuous endeavor of scientists and engineers for over 65 years. The simple contention is that human-created machines can do more than just labor-intensive work; they can develop human-like intelligence. Whether we are aware of it or not, AI has penetrated our daily lives, playing novel roles in industry, healthcare, transportation, education, and many more areas that are close to the general public. AI is believed to be one of the major drivers of socio-economic change. In another respect, AI contributes to the advancement of state-of-the-art technologies in many fields of study, as a helpful tool for groundbreaking research. However, the prosperity of AI as we witness it today was not established smoothly. Over the past decades, AI has struggled through historical stages with several winters. Therefore, at this juncture, to enlighten future development, it is time to discuss the past and present of AI and offer an outlook on its future. In this article, we discuss from a historical perspective how challenges were faced along the evolution of both AI tools and AI systems. In particular, in addition to the technical development of AI in the short to mid-term, thoughts and insights are also presented regarding the symbiotic relationship of AI and humans in the long run.


Citing Articles

5 citations

A Survey of Emergencies Management Systems in Smart Cities

F. Vasques, P. Portugal + 5 others

2022

The rapid urbanization process of the last hundred years has deeply changed the way we live and interact with each other. As most people now live in urban areas, cities are experiencing growing demands for more efficient and sustainable public services that may improve the perceived quality of life, especially with the anticipated impacts of climate change. In this already complex scenario with increasingly overcrowded urban areas, different types of emergency situations may happen anywhere and anytime, with unpredictable costs in human lives and property damage. In order to cope with often unexpected and potentially dangerous emergencies, smart city initiatives have been developed in different cities, addressing multiple aspects of emergency detection, alerting, and mitigation. In this scenario, this article surveys recent smart city solutions for crisis management, proposing definitions for emergency-oriented systems and classifying them according to the employed technologies and provided services. Additionally, recent developments in the domains of the Internet of Things, Artificial Intelligence, and Big Data are also highlighted when associated with the management of urban emergencies, potentially paving the way for new developments, while classifying and organizing them according to different criteria. Finally, open research challenges are identified, indicating promising trends and research directions for the coming years.

Expanding Explainability: Towards Social Transparency in AI systems

Upol Ehsan, Michael J. Muller + 3 others

12 January 2021

As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algorithm-centered. We take a developmental step towards socially-situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST’s effect and implications at the technical, decision-making, and organizational level. The framework showcases how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective actions, and cultivate holistic explainability. Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.

”Clay to Play With”: Generative AI Tools in UX and Industrial Design Practice

T. Jokela, Antti Salovaara + 2 others

1 July 2024

Generative artificial intelligence (GAI) is transforming numerous professions, not least various fields intimately relying on creativity, such as design. To explore GAI's adoption and appropriation in design, an interview-based study probed 10 specialists in user experience and industrial design, with varying tenure and GAI experience, about their adoption and application of GAI tools, reasons for not using them, problems with ownership and agency, speculations about the future of creative work, and GAI tools' roles in design sensemaking. Insights from reflexive thematic analysis revealed wide variation in attitudes toward GAI tools, from threat-oriented negative appraisals to identification of empowerment opportunities, which depended on the sense of agency and perceived control. The paper examines this finding in light of the Coping Model of User Adaptation and discusses designers' metacognitive skills as possible underpinnings for their attitudes. Avenues for further research are identified accordingly.