Toward supporting quality alt text in computing publications
Abstract
While researchers have examined alternative (alt) text for social media and news contexts, few have studied the status and challenges for authoring alt text of figures in computing-related publications. These figures are distinct, often conveying dense visual information, and may necessitate unique accessibility solutions. Accordingly, we explored how to support authors in creating alt text in computing publications---specifically in the field of human-computer interaction (HCI). We conducted two studies: (1) an analysis of 300 recently published figures at a general HCI conference (ACM CHI), and (2) interviews with 10 researchers in HCI and related fields who have varying levels of experience writing alt text. Our findings characterize the prevalence, quality, and patterns of recent figure alt text and captions. We further identify challenges authors encounter, describing their workflow barriers and confusions around how to compose alt text for complex figures. We conclude by outlining a research agenda on process, education, and tooling opportunities to improve alt text in computing-related publications.
Related Research Articles
Nils Rodrigues, Nelusa Pathmanathan, Seyda Öney + 5 others
27 April 2022
We present an exploratory study on the accessibility of images in publications when viewed with color vision deficiencies (CVDs). The study is based on 1,710 images sampled from a visualization dataset (VIS30K) over five years. We simulated four CVDs on each image. First, four researchers (one with a CVD) identified existing issues and helpful aspects in a subset of the images. Based on the resulting labels, 200 crowdworkers provided 30,000 ratings on present CVD issues in the simulated images. We analyzed this data for correlations, clusters, trends, and free text comments to gain a first overview of paper figure accessibility. Overall, about 60 % of the images were rated accessible. Furthermore, our study indicates that accessibility issues are subjective and hard to detect. On a meta-level, we reflect on our study experience to point out challenges and opportunities of large-scale accessibility studies for future research directions.
John Tang, Edward Cutrell, Martez E. Mott
19 April 2023
Profile pictures can convey rich social signals that are often inaccessible to blind and low vision screen reader users. Although there have been efforts to understand screen reader users’ preferences for alternative (alt) text descriptions when encountering images online, profile pictures evoke distinct information needs. We conducted semi-structured interviews with 16 screen reader users to understand their preferences for various styles of profile picture image descriptions in different social contexts. We also interviewed seven sighted individuals to explore their thoughts on authoring alt text for profile pictures. Our findings suggest that detailed image descriptions and user-narrated alt text can provide screen reader users with enjoyable and informative experiences when exploring profile pictures. We also identified mismatches between how sighted individuals would author alt text and what screen reader users prefer to know about profile pictures. We discuss the implications of our findings for social applications that support profile pictures.
Juho Kim, Mina Huh, Dasom Choi + 3 others
29 April 2022
Webtoons are digital comics read online, where readers can leave comments to share their thoughts on the story. While the format has experienced a surge in popularity internationally, people with visual impairments cannot enjoy webtoons because of the lack of an accessible format. While traditional image description practices can be adopted, the resulting descriptions cannot preserve webtoons’ unique values, such as control over the reading pace and social engagement through comments. To improve the webtoon reading experience for blind and low vision (BLV) users, we propose Cocomix, an interactive webtoon reader that incorporates comments into the design of novel webtoon interactions. Since comments can identify story highlights and provide additional context, we designed a system that provides 1) comments-based adaptive descriptions with selective access to details and 2) panel-anchored comments for easy access to relevant descriptive comments. Our evaluation (N=12) showed that Cocomix users could adapt the descriptions to various needs and better utilize comments.
R. Menzies, Garreth W. Tigwell, Benjamin M. Gorman
21 April 2020
Emoji are graphical symbols that appear in many aspects of our lives. Worldwide, around 36 million people are blind and 217 million have a moderate to severe visual impairment. This portion of the population may use and encounter emoji, yet it is unclear what accessibility challenges emoji introduce. We first conducted an online survey with 58 visually impaired participants to understand how they use and encounter emoji online, and the challenges they experience. We then conducted 11 interviews with screen reader users to understand more about the challenges reported in our survey findings. Our interview findings demonstrate that technology is both an enabler and a barrier, that emoji descriptors can hinder communication, and that the use of emoji therefore impacts social interaction. Using our findings from both studies, we propose best practices for using emoji and recommendations to improve the future accessibility of emoji for visually impaired people.
Caluã de Lacerda Pataca, Matthew Watkins, Matt Huenerfauth + 2 others
19 April 2023
Speech is expressive in ways that caption text does not capture, leaving emotion and emphasis information unconveyed. We interviewed eight Deaf and Hard-of-Hearing (DHH) individuals to understand if and how captions’ inexpressiveness impacts them in online meetings with hearing peers. Automatically captioned speech, we found, lacks affective depth, lending it a hard-to-parse ambiguity and general dullness. Interviewees regularly feel excluded, which some understand as an inherent quality of these types of meetings rather than a consequence of current caption text design. Next, we developed three novel captioning models that depicted, beyond words, features from prosody, emotions, and a mix of both. In an empirical study, 16 DHH participants compared these models with conventional captions. The emotion-based model outperformed traditional captions in depicting emotions and emphasis, with only a moderate loss in legibility, suggesting its potential as a more inclusive design for captions.