DOI: 10.1145/3411764.3445572
Published 26 March 2021 at the International Conference on Human Factors in Computing Systems

Say It All: Feedback for Improving Non-Visual Presentation Accessibility

Jeffrey P. Bigham, Yi-Hao Peng, JiWoong Jang + 1 author

Abstract

Presenters commonly use slides as visual aids for informative talks. When presenters fail to verbally describe the content on their slides, blind and visually impaired audience members lose access to necessary content, making the presentation difficult to follow. Our analysis of 90 presentation videos revealed that 72% of 610 visual elements (e.g., images, text) were insufficiently described. To help presenters create accessible presentations, we introduce Presentation A11y, a system that provides real-time and post-presentation accessibility feedback. Our system analyzes visual elements on the slide and the transcript of the verbal presentation to provide element-level feedback on what visual content needs to be further described or even removed. Presenters using our system with their own slide-based presentations described more of the content on their slides, and identified 3.26 times more accessibility problems to fix after the talk than when using a traditional slide-based presentation interface. Integrating accessibility feedback into content creation tools will improve the accessibility of informational content for all.
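To make the element-level feedback idea concrete, the following is a minimal, hypothetical Python sketch of how slide content could be matched against a talk transcript: each element's text (or alt text) is reduced to content words, and elements whose words rarely appear in the transcript are flagged as under-described. The token-overlap scoring, the 0.5 threshold, and all names are illustrative assumptions, not the authors' implementation, which the abstract only describes as analyzing visual elements and the verbal transcript.

```python
"""
Hedged sketch of element-level description coverage (not Presentation A11y's code).
"""

import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "for", "on"}

def content_words(text: str) -> set[str]:
    """Lowercase, strip punctuation, and drop common stopwords."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return {t for t in tokens if t not in STOPWORDS}

def coverage(element_text: str, transcript: str) -> float:
    """Fraction of an element's content words that appear in the transcript."""
    element_words = content_words(element_text)
    if not element_words:
        return 1.0  # nothing to describe (e.g., a decorative element)
    spoken = content_words(transcript)
    return len(element_words & spoken) / len(element_words)

def flag_underdescribed(slide_elements: dict[str, str],
                        transcript: str,
                        threshold: float = 0.5) -> list[tuple[str, float]]:
    """Return (element_id, coverage) pairs that fall below the threshold."""
    return [(eid, score)
            for eid, text in slide_elements.items()
            if (score := coverage(text, transcript)) < threshold]

if __name__ == "__main__":
    slide = {
        "title": "Results: 72% of visual elements were insufficiently described",
        "chart_alt": "Bar chart comparing described vs. undescribed elements",
    }
    spoken = "As the title says, most visual elements were insufficiently described."
    for element_id, score in flag_underdescribed(slide, spoken):
        print(f"Consider describing '{element_id}' further (coverage {score:.0%})")
```

A production system would likely rely on richer matching (e.g., semantic similarity rather than exact word overlap) and on vision/OCR models to extract element text from slides, as the abstract implies.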

Related Scientific Articles

“It’s Kind of Context Dependent”: Understanding Blind and Low Vision People’s Video Accessibility Preferences Across Viewing Scenarios

Crescentia Jung, Shiri Azenkot, Abigale Stangl + 2 others

16 March 2024

While audio description (AD) is the standard approach for making videos accessible to blind and low vision (BLV) people, existing AD guidelines do not consider BLV users’ varied preferences across viewing scenarios. These scenarios range from how-to videos on YouTube, where users seek to learn new skills, to historical dramas on Netflix, where a user’s goal is entertainment. Additionally, the increase in video watching on mobile devices provides an opportunity to integrate nonverbal output modalities (e.g., audio cues, tactile elements, and visual enhancements). Through a formative survey and 15 semi-structured interviews, we identified BLV people’s video accessibility preferences across diverse scenarios. For example, participants valued action and equipment details for how-to videos, tactile graphics for learning scenarios, and 3D models for fantastical content. We define a six-dimensional video accessibility design space to guide future innovation and discuss how to move from “one-size-fits-all” paradigms to scenario-specific approaches.

Co11ab: Augmenting Accessibility in Synchronous Collaborative Writing for People with Vision Impairments

Darren Gergle, Anne Marie Piper, T. B. McHugh + 1 other

29 April 2022

Collaborative writing is an integral part of academic and professional work. Although some prior research has focused on accessibility in collaborative writing, we know little about how visually impaired writers work in real-time with sighted collaborators or how online editing tools could better support their work. Grounded in formative interviews and observations with eight screen reader users, we built Co11ab, a Google Docs extension that provides configurable audio cues to facilitate understanding who is editing (or edited) what and where in a shared document. Results from a design exploration with fifteen screen reader users, including three naturalistic sessions of use with sighted colleagues, reveal how screen reader users understand various auditory representations and use them to coordinate real-time collaborative writing. We revisit what collaboration awareness means for screen reader users and discuss design considerations for future systems.
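As a rough illustration of the kind of mapping such a tool performs, the hedged Python sketch below translates a remote collaborator's edit event into audio-cue parameters: a per-collaborator pitch and a stereo pan derived from where in the document the edit occurred. The event fields, pitch table, and pan mapping are assumptions for illustration and are not Co11ab's actual design.

```python
"""
Illustrative sketch of mapping shared-document edit events to audio cues
(not Co11ab's implementation).
"""

from dataclasses import dataclass

@dataclass
class EditEvent:
    collaborator: str   # display name of the remote editor
    char_offset: int    # position of the edit in the document
    doc_length: int     # total document length at the time of the edit

# One base pitch (Hz) per collaborator; a real tool would make this user-configurable.
COLLABORATOR_PITCH = {"Ana": 440.0, "Ben": 523.3, "Chris": 659.3}

def cue_for(event: EditEvent) -> dict:
    """Translate an edit event into audio-cue parameters."""
    pitch = COLLABORATOR_PITCH.get(event.collaborator, 330.0)
    # Map document position (start..end) onto stereo pan (-1.0 left .. +1.0 right).
    pan = 2.0 * (event.char_offset / max(event.doc_length, 1)) - 1.0
    return {"pitch_hz": pitch, "pan": round(pan, 2), "duration_ms": 120}

if __name__ == "__main__":
    event = EditEvent(collaborator="Ben", char_offset=900, doc_length=1200)
    print(cue_for(event))   # {'pitch_hz': 523.3, 'pan': 0.5, 'duration_ms': 120}
```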

Visualization of Speech Prosody and Emotion in Captions: Accessibility for Deaf and Hard-of-Hearing Users

Caluã de Lacerda Pataca, Matthew Watkins, Matt Huenerfauth + 2 others

19 April 2023

Speech is expressive in ways that caption text does not capture, with emotion or emphasis information not conveyed. We interviewed eight Deaf and Hard-of-Hearing (dhh) individuals to understand if and how captions’ inexpressiveness impacts them in online meetings with hearing peers. Automatically captioned speech, we found, lacks affective depth, lending it a hard-to-parse ambiguity and general dullness. Interviewees regularly feel excluded, which some understand is an inherent quality of these types of meetings rather than a consequence of current caption text design. Next, we developed three novel captioning models that depicted, beyond words, features from prosody, emotions, and a mix of both. In an empirical study, 16 dhh participants compared these models with conventional captions. The emotion-based model outperformed traditional captions in depicting emotions and emphasis, with only a moderate loss in legibility, suggesting its potential as a more inclusive design for captions.
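The sketch below illustrates, in hedged form, the general mechanism behind prosody-aware captions: per-word acoustic features (here, relative loudness and pitch) drive caption styling, such as bolding emphasized words. The feature names, thresholds, and HTML-style markup are illustrative assumptions rather than the paper's captioning models.

```python
"""
Hedged sketch of prosody-driven caption styling (not the paper's models).
"""

from dataclasses import dataclass

@dataclass
class Word:
    text: str
    loudness: float   # relative energy, 0.0 .. 1.0
    pitch: float      # relative F0, 0.0 .. 1.0

def style_caption(words: list[Word],
                  emphasis_threshold: float = 0.75) -> str:
    """Render a caption line, wrapping emphasized words in <b> tags."""
    styled = []
    for w in words:
        # Treat unusually loud, high-pitched words as emphasized speech.
        if w.loudness >= emphasis_threshold and w.pitch >= emphasis_threshold:
            styled.append(f"<b>{w.text}</b>")
        else:
            styled.append(w.text)
    return " ".join(styled)

if __name__ == "__main__":
    line = [Word("that", 0.4, 0.5), Word("was", 0.3, 0.4),
            Word("amazing", 0.9, 0.85)]
    print(style_caption(line))   # that was <b>amazing</b>
```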

Community-Driven Information Accessibility: Online Sign Language Content Creation within d/Deaf Communities

Xin Tong, Xiang Chang, Yingjie (MaoMao) Ni + 3 others

19 April 2023

Information access is one of the most significant challenges faced by d/Deaf signers due to a lack of sign language information. Given the challenges in machine-driven solutions, we seek to understand how d/Deaf communities can support the growth of sign language content. Based on interviews with 12 d/Deaf people in China, we found that d/Deaf videos, i.e., sign language videos created by and for d/Deaf people, can be crucial information sources and educational materials. Combining content analysis of 360 d/Deaf videos to better understand this type of video, we show how d/Deaf communities co-create information accessibility through collaboration in content creation online. We uncover two major challenges that creators need to address, e.g., difficulties in interpretation and inconsistent content qualities. We propose potential design opportunities and future research directions to support d/Deaf people’s needs for sign language content through collaboration within d/Deaf communities.

AXNav: Replaying Accessibility Tests from Natural Language

Maryam Taeb, Ruijia Cheng, E. Schoop + 3 others

3 October 2023

Developers and quality assurance testers often rely on manual testing to test accessibility features throughout the product lifecycle. Unfortunately, manual testing can be tedious, often has an overwhelming scope, and can be difficult to schedule amongst other development milestones. Recently, Large Language Models (LLMs) have been used for a variety of tasks including automation of UIs. However, to our knowledge, no one has yet explored the use of LLMs in controlling assistive technologies for the purposes of supporting accessibility testing. In this paper, we explore the requirements of a natural language based accessibility testing workflow, starting with a formative study. From this we build a system that takes a manual accessibility test instruction in natural language (e.g., “Search for a show in VoiceOver”) as input and uses an LLM combined with pixel-based UI Understanding models to execute the test and produce a chaptered, navigable video. In each video, to help QA testers, we apply heuristics to detect and flag accessibility issues (e.g., Text size not increasing with Large Text enabled, VoiceOver navigation loops). We evaluate this system through a 10-participant user study with accessibility QA professionals who indicated that the tool would be very useful in their current work and performed tests similarly to how they would manually test the features. The study also reveals insights for future work on using LLMs for accessibility testing.
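One of the flagged issues mentioned above, text size not increasing with Large Text enabled, can be approximated with a simple heuristic: compare the heights of detected text boxes before and after toggling the setting. The Python sketch below assumes text boxes have already been extracted from the two screenshots by some pixel-based UI understanding step; the box format and the growth factor are illustrative assumptions, not AXNav's implementation.

```python
"""
Hedged sketch of a Large Text regression heuristic (not AXNav's code).
"""

from statistics import median

# Each box is (x, y, width, height) in screen pixels.
Box = tuple[int, int, int, int]

def median_text_height(boxes: list[Box]) -> float:
    """Median height of detected text boxes on a screenshot."""
    return median(h for _, _, _, h in boxes) if boxes else 0.0

def large_text_ignored(before: list[Box], after: list[Box],
                       min_growth: float = 1.05) -> bool:
    """True if text did not grow meaningfully after enabling Large Text."""
    h_before, h_after = median_text_height(before), median_text_height(after)
    return h_before > 0 and h_after < h_before * min_growth

if __name__ == "__main__":
    # Identical box heights before and after: the setting had no visible effect.
    before = [(10, 40, 200, 18), (10, 80, 180, 18), (10, 120, 160, 20)]
    after = [(10, 40, 200, 18), (10, 80, 180, 18), (10, 120, 160, 20)]
    if large_text_ignored(before, after):
        print("Flag: text size did not increase with Large Text enabled")
```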
