Supporting Accessible Data Visualization Through Audio Data Narratives
Abstract
Online data visualizations play an important role in informing public opinion but are often inaccessible to screen reader users. To address the need for accessible data representations on the web that provide direct, multimodal, and up-to-date access to the data, we investigate audio data narratives, which combine textual descriptions and sonification (the mapping of data to non-speech sounds). We conduct two co-design workshops with screen reader users to define design principles that guide the structure, content, and duration of a data narrative. Based on these principles and relevant auditory processing characteristics, we propose a dynamic programming approach to automatically generate an audio data narrative from a given dataset. We evaluate our approach with 16 screen reader users. Findings show that with audio narratives, users gain significantly more insights from the data. Users describe that data narratives help them better extract and comprehend the information in both the sonification and the description.
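The abstract describes the generation step only at a high level. As a rough sketch of how a dynamic programming pass could carve a dataset into narrative segments, the snippet below minimizes within-segment fitting error plus a fixed per-segment cost; every name and the cost term are hypothetical stand-ins, not the authors' implementation:

```python
import numpy as np

def segment_series(values, segment_cost=5.0):
    """Split a 1-D series into narration segments by dynamic programming:
    minimize total within-segment line-fit error plus a fixed cost per
    segment (a hypothetical stand-in for the paper's duration/content
    constraints)."""
    n = len(values)

    def fit_error(i, j):
        # Squared error of fitting a straight line to values[i..j].
        xs = np.arange(i, j + 1)
        ys = np.asarray(values[i:j + 1], dtype=float)
        if len(xs) < 2:
            return 0.0
        coeffs = np.polyfit(xs, ys, 1)  # slope, intercept
        return float(np.sum((np.polyval(coeffs, xs) - ys) ** 2))

    # best[j]: minimal cost to segment values[0..j];
    # cut[j]: start index of the last segment in that optimum.
    best, cut = [0.0] * n, [0] * n
    for j in range(n):
        best[j] = float("inf")
        for i in range(j + 1):
            prev = best[i - 1] if i > 0 else 0.0
            cost = prev + fit_error(i, j) + segment_cost
            if cost < best[j]:
                best[j], cut[j] = cost, i

    # Recover segment boundaries by walking the cut table backwards.
    segments, j = [], n - 1
    while j >= 0:
        segments.append((cut[j], j))
        j = cut[j] - 1
    return segments[::-1]
```

Each recovered segment could then be rendered as a short textual description followed by a sonified clip of that span of the data.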
Related Scientific Articles
Crystal Lee, Arvind Satyanarayan, Alan Lundgard + 3 others
10 May 2022
Current web accessibility guidelines ask visualization designers to support screen readers via basic non‐visual alternatives like textual descriptions and access to raw data tables. But charts do more than summarize data or reproduce tables; they afford interactive data exploration at varying levels of granularity—from fine‐grained datum‐by‐datum reading to skimming and surfacing high‐level trends. In response to the lack of comparable non‐visual affordances, we present a set of rich screen reader experiences for accessible data visualization and exploration. Through an iterative co‐design process, we identify three key design dimensions for expressive screen reader accessibility: structure, or how chart entities should be organized for a screen reader to traverse; navigation, or the structural, spatial, and targeted operations a user might perform to step through the structure; and, description, or the semantic content, composition, and verbosity of the screen reader's narration. We operationalize these dimensions to prototype screen‐reader‐accessible visualizations that cover a diverse range of chart types and combinations of our design dimensions. We evaluate a subset of these prototypes in a mixed‐methods study with 13 blind and visually impaired readers. Our findings demonstrate that these designs help users conceptualize data spatially, selectively attend to data of interest at different levels of granularity, and experience control and agency over their data analysis process. An accessible HTML version of this paper is available at: http://vis.csail.mit.edu/pubs/rich-screen-reader-vis-experiences.
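The three design dimensions suggest a tree-shaped abstraction: structure fixes the hierarchy of chart entities, navigation steps a cursor through it, and description is what the focused node announces. A minimal sketch under those assumptions (the types and labels are illustrative, not the authors' prototype code):

```python
from dataclasses import dataclass, field

@dataclass
class ChartNode:
    """One traversable chart entity: root summary, axis, mark group,
    or single datum (illustrative, not the paper's prototypes)."""
    label: str                          # what the screen reader announces
    children: list = field(default_factory=list)

# Structure: organize entities hierarchically, coarse to fine.
root = ChartNode("Line chart of average temperature, 2010 to 2013")
root.children = [
    ChartNode("X axis: year, 2010 to 2013"),
    ChartNode("Series: average temperature", [
        ChartNode(f"{2010 + i}: {t} degrees C")
        for i, t in enumerate([14.1, 14.3, 14.2, 14.5])
    ]),
]

# Navigation: structural moves descend into children or step across
# siblings; description is the label of whatever node holds focus.
def announce(node, path):
    for index in path:                  # walk down the chosen branch
        node = node.children[index]
    return node.label

print(announce(root, []))               # high-level chart summary
print(announce(root, [1]))              # the data series
print(announce(root, [1, 2]))           # one datum: "2012: 14.2 degrees C"
```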
Richard E. Ladner, Aashaka Desai, Jennifer Mankoff
19 April 2023
Speechreading is the art of using visual and contextual cues in the environment to support listening. Often used by d/Deaf and Hard-of-Hearing (d/DHH) individuals, it highlights nuances of rich communication. However, lived experiences of speechreaders are underdocumented in HCI literature, and the impact of online environments and interactions of captioning with speechreading has not been explored in depth. We bridge these gaps through a three-part study consisting of formative interviews, design probes, and design sessions with 12 d/DHH individuals who speechread. Our primary contribution is to understand the lived experience of speechreading in online communication, and thus to better understand the richness and variety of techniques d/DHH individuals use to provision access. We highlight technical, environmental, and sociocultural factors that impact communication accessibility, explore the design space of speechreading supports, and share considerations for the future design of speechreading technology.
Caluã de Lacerda Pataca, Matthew Watkins, Matt Huenerfauth + 2 others
19 April 2023
Speech is expressive in ways that caption text does not capture, with emotion or emphasis information not conveyed. We interviewed eight Deaf and Hard-of-Hearing (dhh) individuals to understand if and how captions’ inexpressiveness impacts them in online meetings with hearing peers. Automatically captioned speech, we found, lacks affective depth, lending it a hard-to-parse ambiguity and general dullness. Interviewees regularly feel excluded, which some understand is an inherent quality of these types of meetings rather than a consequence of current caption text design. Next, we developed three novel captioning models that depicted, beyond words, features from prosody, emotions, and a mix of both. In an empirical study, 16 dhh participants compared these models with conventional captions. The emotion-based model outperformed traditional captions in depicting emotions and emphasis, with only a moderate loss in legibility, suggesting its potential as a more inclusive design for captions.
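The captioning models are characterized above only at the interface level. As a purely illustrative sketch of the general idea, the snippet below maps two hypothetical prosody features (pitch and loudness) to caption styling; the feature names, thresholds, and HTML mapping are assumptions, not the paper's models:

```python
def style_caption_word(word, pitch_hz, loudness_db,
                       pitch_ref=180.0, loud_ref=60.0):
    """Render one caption word as HTML whose size and weight reflect
    prosody -- a hypothetical mapping, not the paper's actual models."""
    emphasized = loudness_db > loud_ref + 6   # noticeably louder than baseline
    raised = pitch_hz > pitch_ref * 1.25      # noticeably higher pitch
    size = "1.2em" if raised else "1em"
    weight = "bold" if emphasized else "normal"
    return (f'<span style="font-size:{size};font-weight:{weight}">'
            f'{word}</span>')

# A loud, high-pitched word gets rendered large and bold.
print(style_caption_word("really", pitch_hz=240, loudness_db=70))
```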
Emma Petersen, D. Szafir, Keke Wu + 3 others
6 May 2021
Using visualization requires people to read abstract visual imagery, estimate statistics, and retain information. However, people with Intellectual and Developmental Disabilities (IDD) often process information differently, which may complicate connecting abstract visual information to real-world quantities. This population has traditionally been excluded from visualization design, and often has limited access to data related to their well-being. We explore how visualizations may better serve this population. We identify three visualization design elements that may improve data accessibility: chart type, chart embellishment, and data continuity. We evaluate these elements with populations both with and without IDD, measuring accuracy and efficiency in a web-based online experiment with time series and proportion data. Our study identifies performance patterns and subjective preferences for people with IDD when reading common visualizations. These findings suggest possible solutions that may break the cognitive barriers caused by conventional design guidelines.
J. Stasko, Arpit Narechania, Arjun Srinivasan
24 August 2020
Natural language interfaces (NLIs) have shown great promise for visual data analysis, allowing people to flexibly specify and interact with visualizations. However, developing visualization NLIs remains a challenging task, requiring low-level implementation of natural language processing (NLP) techniques as well as knowledge of visual analytic tasks and visualization design. We present NL4DV, a toolkit for natural language-driven data visualization. NL4DV is a Python package that takes as input a tabular dataset and a natural language query about that dataset. In response, the toolkit returns an analytic specification modeled as a JSON object containing data attributes, analytic tasks, and a list of Vega-Lite specifications relevant to the input query. In doing so, NL4DV aids visualization developers who may not have a background in NLP, enabling them to create new visualization NLIs or incorporate natural language input within their existing systems. We demonstrate NL4DV's usage and capabilities through four examples: 1) rendering visualizations using natural language in a Jupyter notebook, 2) developing an NLI to specify and edit Vega-Lite charts, 3) recreating data ambiguity widgets from the DataTone system, and 4) incorporating speech input to create a multimodal visualization system.
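Since NL4DV is distributed as a Python package, a usage sketch follows directly from the description above. The dataset file name and the query below are placeholders, and exact key names in the response may differ across versions of the toolkit:

```python
from nl4dv import NL4DV

# Point the toolkit at a tabular dataset (placeholder file name).
nl4dv_instance = NL4DV(data_url="movies.csv")

# Ask a natural language question about that dataset.
response = nl4dv_instance.analyze_query("show average gross by genre")

# The returned JSON-style object carries the inferred data attributes,
# analytic tasks, and a list of candidate Vega-Lite specifications.
print(response["attributeMap"])   # attributes detected in the query
print(response["taskMap"])        # inferred analytic tasks
for vis in response["visList"]:   # candidate visualizations
    print(vis["vlSpec"])          # a Vega-Lite spec, ready to render
```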