DOI: 10.1145/3544548.3581511
Published 19 April 2023 at the International Conference on Human Factors in Computing Systems

Visualization of Speech Prosody and Emotion in Captions: Accessibility for Deaf and Hard-of-Hearing Users

Caluã de Lacerda Pataca, Matthew Watkins, Matt Huenerfauth + 2 authors

Abstract

Speech is expressive in ways that caption text does not capture: emotion and emphasis go unconveyed. We interviewed eight Deaf and Hard-of-Hearing (DHH) individuals to understand if and how captions’ inexpressiveness affects them in online meetings with hearing peers. Automatically captioned speech, we found, lacks affective depth, lending it a hard-to-parse ambiguity and general dullness. Interviewees regularly feel excluded, which some regard as an inherent quality of these types of meetings rather than a consequence of current caption design. Next, we developed three novel captioning models that depicted, beyond words, features from prosody, emotion, or a mix of both. In an empirical study, 16 DHH participants compared these models with conventional captions. The emotion-based model outperformed conventional captions in depicting emotion and emphasis, with only a moderate loss in legibility, suggesting its potential as a more inclusive caption design.
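The abstract describes the caption models only at a high level. Purely as an illustrative sketch, and not the authors' implementation, one way a prosody-informed caption could be rendered is to map per-word acoustic features (here, assumed normalized loudness and pitch values) onto text styling:

```python
# Illustrative sketch only -- NOT the paper's caption models.
# Maps hypothetical per-word prosody features (relative loudness, pitch)
# to simple caption styling, roughly in the spirit of prosody-based captions.

from dataclasses import dataclass

@dataclass
class Word:
    text: str
    loudness: float  # assumed: normalized 0..1 relative to the utterance
    pitch: float     # assumed: normalized 0..1 relative to the speaker's range

def style_word(word: Word) -> str:
    """Return an HTML-like styled token: louder words get heavier font weight,
    higher-pitched words get a larger font size (an assumed mapping)."""
    weight = 400 + int(word.loudness * 300)   # 400..700
    size_em = 1.0 + 0.4 * word.pitch          # 1.0..1.4 em
    return f'<span style="font-weight:{weight};font-size:{size_em:.2f}em">{word.text}</span>'

def style_caption(words: list[Word]) -> str:
    """Join styled tokens into one caption line."""
    return " ".join(style_word(w) for w in words)

if __name__ == "__main__":
    demo = [Word("I", 0.2, 0.3), Word("really", 0.9, 0.8),
            Word("mean", 0.5, 0.4), Word("it", 0.3, 0.2)]
    print(style_caption(demo))
```

The specific feature-to-style mapping (loudness to weight, pitch to size) is an assumption for demonstration; the paper's prosody- and emotion-based models may visualize these cues quite differently.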

Related Scholarly Articles

Understanding and Enhancing The Role of Speechreading in Online d/DHH Communication Accessibility

Richard E. Ladner, Aashaka Desai, Jennifer Mankoff

19 April 2023

Speechreading is the art of using visual and contextual cues in the environment to support listening. Often used by d/Deaf and Hard-of-Hearing (d/DHH) individuals, it highlights nuances of rich communication. However, the lived experiences of speechreaders are underdocumented in HCI literature, and the impact of online environments and the interaction of captioning with speechreading have not been explored in depth. We bridge these gaps through a three-part study consisting of formative interviews, design probes, and design sessions with 12 d/DHH individuals who speechread. Our primary contribution is to understand the lived experience of speechreading in online communication, and thus to better understand the richness and variety of techniques d/DHH individuals use to provision access. We highlight technical, environmental, and sociocultural factors that impact communication accessibility, explore the design space of speechreading supports, and share considerations for the future design of speechreading technology.

Live Captions in Virtual Reality (VR)

R. Kushalnagar, Dawson Franz, Christian Vogler + 2 others

26 October 2022

Few VR applications and games implement captioning of speech and audio cues, which inhibits or prevents access to these applications for deaf or hard-of-hearing (DHH) users, new language learners, and other caption users. Additionally, little to no guidance exists on how to implement live captioning on VR headsets and how it may differ from traditional television captioning. To help fill the gap in knowledge about user preferences for different VR captioning styles, we conducted a study with eight DHH participants to test three caption movement behaviors (headlocked, lag, and appear) while watching live-captioned, single-speaker presentations in VR. Participants answered a series of Likert-scale and open-ended questions about their experience. Participant preferences were split, but the majority of participants reported feeling comfortable with using live captions in VR and enjoyed the experience. When participants ranked the caption behaviors, there was an almost equal divide among the three types tested. IPQ results indicated that each behavior had similar immersion ratings; however, participants found headlocked and lag captions more user-friendly than appear captions. We suggest that participants may vary in caption preference depending on how they use captions, and that providing opportunities for caption customization is best.
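The three caption movement behaviors (headlocked, lag, and appear) lend themselves to a simple per-frame position rule. The sketch below is a hypothetical illustration under a one-dimensional head-yaw model, not the study's VR implementation; the lag factor and appear threshold are assumed values:

```python
# Illustrative sketch (not the study's code): per-frame caption anchoring for
# the three movement behaviors described above, in a minimal head-yaw model.

def caption_yaw(behavior: str, head_yaw: float, prev_caption_yaw: float,
                lag_factor: float = 0.1, appear_threshold: float = 30.0) -> float:
    """Return the caption's yaw (degrees) for this frame.

    behavior: 'headlocked' -- caption always centered in the user's view;
              'lag'        -- caption eases toward the view center each frame;
              'appear'     -- caption stays put until the user looks far enough
                              away, then jumps to the new view center.
    Parameter names and default values are assumptions for illustration.
    """
    if behavior == "headlocked":
        return head_yaw
    if behavior == "lag":
        return prev_caption_yaw + lag_factor * (head_yaw - prev_caption_yaw)
    if behavior == "appear":
        if abs(head_yaw - prev_caption_yaw) > appear_threshold:
            return head_yaw
        return prev_caption_yaw
    raise ValueError(f"unknown behavior: {behavior}")
```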

Community-Driven Information Accessibility: Online Sign Language Content Creation within d/Deaf Communities

Xin Tong, Xiang Chang, Yingjie (MaoMao) Ni + 3 others

19 April 2023

Information access is one of the most significant challenges faced by d/Deaf signers due to a lack of sign language information. Given the challenges in machine-driven solutions, we seek to understand how d/Deaf communities can support the growth of sign language content. Based on interviews with 12 d/Deaf people in China, we found that d/Deaf videos, i.e., sign language videos created by and for d/Deaf people, can be crucial information sources and educational materials. Combining this with a content analysis of 360 d/Deaf videos to better understand this type of video, we show how d/Deaf communities co-create information accessibility through collaboration in online content creation. We uncover two major challenges that creators need to address: difficulties in interpretation and inconsistent content quality. We propose potential design opportunities and future research directions to support d/Deaf people’s needs for sign language content through collaboration within d/Deaf communities.

Say It All: Feedback for Improving Non-Visual Presentation Accessibility

Jeffrey P. Bigham, Yi-Hao Peng, JiWoong Jang + 1 other

26 March 2021

Presenters commonly use slides as visual aids for informative talks. When presenters fail to verbally describe the content on their slides, blind and visually impaired audience members lose access to necessary content, making the presentation difficult to follow. Our analysis of 90 presentation videos revealed that 72% of 610 visual elements (e.g., images, text) were insufficiently described. To help presenters create accessible presentations, we introduce Presentation A11y, a system that provides real-time and post-presentation accessibility feedback. Our system analyzes visual elements on the slide and the transcript of the verbal presentation to provide element-level feedback on what visual content needs to be further described or even removed. Presenters using our system with their own slide-based presentations described more of the content on their slides, and identified 3.26 times more accessibility problems to fix after the talk than when using a traditional slide-based presentation interface. Integrating accessibility feedback into content creation tools will improve the accessibility of informational content for all.
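As a rough illustration of element-level coverage checking, and not the actual Presentation A11y pipeline, slide text elements can be compared against the spoken transcript with a simple word-overlap heuristic; the coverage threshold below is an assumed parameter:

```python
# Illustrative heuristic (not the Presentation A11y implementation): flag slide
# text elements whose words barely appear in the spoken transcript.

import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens for a crude overlap comparison."""
    return set(re.findall(r"[a-z']+", text.lower()))

def undescribed_elements(slide_elements: list[str], transcript: str,
                         coverage_threshold: float = 0.5) -> list[str]:
    """Return slide text elements whose word overlap with the transcript
    falls below `coverage_threshold` (an assumed cutoff)."""
    spoken = _tokens(transcript)
    flagged = []
    for element in slide_elements:
        words = _tokens(element)
        if not words:
            continue
        coverage = len(words & spoken) / len(words)
        if coverage < coverage_threshold:
            flagged.append(element)
    return flagged

if __name__ == "__main__":
    slides = ["Results: 72% of elements undescribed", "Thank you"]
    talk = "our results show that seventy-two percent of elements were undescribed"
    print(undescribed_elements(slides, talk))
```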

Supporting Accessible Data Visualization Through Audio Data Narratives

Sile O'Modhrain, Gene S-H Kim, Sean Follmer + 1 other

29 April 2022

Online data visualizations play an important role in informing public opinion but are often inaccessible to screen reader users. To address the need for accessible data representations on the web that provide direct, multimodal, and up-to-date access to the data, we investigate audio data narratives, which combine textual descriptions and sonification (the mapping of data to non-speech sounds). We conduct two co-design workshops with screen reader users to define design principles that guide the structure, content, and duration of a data narrative. Based on these principles and relevant auditory processing characteristics, we propose a dynamic programming approach to automatically generate an audio data narrative from a given dataset. We evaluate our approach with 16 screen reader users. Findings show that with audio narratives, users gain significantly more insights from the data. Users described that data narratives helped them better extract and comprehend the information in both the sonification and the description.
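The dynamic-programming idea can be sketched as follows. This is an assumed formulation for illustration, not the paper's algorithm: a placeholder "interest" score stands in for the paper's principles on structure, content, and duration, and a cap on segment count stands in for a duration budget:

```python
# Illustrative dynamic-programming sketch (not the paper's algorithm): split a
# data series into contiguous narrative segments, maximizing total segment
# "interest" under a cap on the number of segments.

from functools import lru_cache

def segment_narrative(values: list[float], max_segments: int):
    """Return (best_score, segments) where segments are (start, end) index pairs.
    The segment score here -- absolute change across the segment -- is an
    assumed placeholder for a real interestingness/duration model."""
    n = len(values)

    def score(i: int, j: int) -> float:          # segment covers values[i:j]
        return abs(values[j - 1] - values[i])

    @lru_cache(maxsize=None)
    def best(i: int, k: int):
        if i == n:                               # all data covered
            return 0.0, ()
        if k == 0:                               # data left but no segments left
            return float("-inf"), ()
        best_total, best_cuts = float("-inf"), ()
        for j in range(i + 1, n + 1):            # try every next segment end
            rest, cuts = best(j, k - 1)
            total = score(i, j) + rest
            if total > best_total:
                best_total, best_cuts = total, ((i, j),) + cuts
        return best_total, best_cuts

    return best(0, max_segments)

if __name__ == "__main__":
    data = [1.0, 1.2, 3.5, 3.4, 2.0, 5.0]
    print(segment_narrative(data, max_segments=3))
```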
