“It’s Complicated”: Negotiating Accessibility and (Mis)Representation in Image Descriptions of Race, Gender, and Disability
Abstract
Content creators are instructed to write textual descriptions of visual content to make it accessible; yet existing guidelines lack specifics on how to write about people’s appearance, particularly while remaining mindful of consequences of (mis)representation. In this paper, we report on interviews with screen reader users who were also Black, Indigenous, People of Color, Non-binary, and/or Transgender on their current image description practices and preferences, and experiences negotiating their own and others’ appearances non-visually. We discuss these perspectives, and the ethics of humans and AI describing appearance characteristics that may convey the race, gender, and disabilities of those photographed. In turn, we share considerations for more carefully describing appearance, and contexts in which such information is perceived as salient. Finally, we offer tensions and questions for accessibility research to equitably consider the politics and ecosystems in which technologies will embed, such as the potential risks of human and AI biases amplifying through image descriptions.
Related Scholarly Articles
John Tang, Edward Cutrell, Martez E. Mott
19 April 2023
Profile pictures can convey rich social signals that are often inaccessible to blind and low vision screen reader users. Although there have been efforts to understand screen reader users’ preferences for alternative (alt) text descriptions when encountering images online, profile pictures evoke distinct information needs. We conducted semi-structured interviews with 16 screen reader users to understand their preferences for various styles of profile picture image descriptions in different social contexts. We also interviewed seven sighted individuals to explore their thoughts on authoring alt text for profile pictures. Our findings suggest that detailed image descriptions and user-narrated alt text can provide screen reader users with enjoyable and informative experiences when exploring profile pictures. We also identified mismatches between how sighted individuals would author alt text and what screen reader users prefer to know about profile pictures. We discuss the implications of our findings for social applications that support profile pictures.
M. Scheuerman, Kandrea Wade, Caitlin Lustig + 1 more
28 May 2020
Race and gender have long sociopolitical histories of classification in technical infrastructures, from the passport to social media. Facial analysis technologies are particularly pertinent to understanding how identity is operationalized in new technical systems. What facial analysis technologies can do is determined by the data available to train and evaluate them with. In this study, we specifically focus on this data by examining how race and gender are defined and annotated in image databases used for facial analysis. We found that the majority of image databases rarely contain underlying source material for how those identities are defined. Further, when they are annotated with race and gender information, database authors rarely describe the process of annotation. Instead, classifications of race and gender are portrayed as insignificant, indisputable, and apolitical. We discuss the limitations of these approaches given the sociohistorical nature of race and gender. We posit that the lack of critical engagement with this nature renders databases opaque and less trustworthy. We conclude by encouraging database authors to address both the histories of classification inherently embedded into race and gender, as well as their positionality in embedding such classifications.
Jaylin Herskovitz, Robin Brewer, Rahaf Alharbi + 2 more
13 August 2024
Blind people use artificial intelligence-enabled visual assistance technologies (AI VAT) to gain visual access in their everyday lives, but these technologies are embedded with errors that may be difficult to verify non-visually. Previous studies have primarily explored sighted users' understanding of AI output and created vision-dependent explainable AI (XAI) features. We extend this body of literature by conducting an in-depth qualitative study with 26 blind people to understand their verification experiences and preferences. We begin by describing errors blind people encounter, highlighting how AI VAT fails to support complex document layouts, diverse languages, and cultural artifacts. We then illuminate how blind people make sense of AI through experimenting with AI VAT, employing non-visual skills, strategically including sighted people, and cross-referencing with other devices. Participants provided detailed opportunities for designing accessible XAI, such as affordances to support contestation. Informed by the disability studies framework of misfitting and fitting, we unpacked harmful assumptions with AI VAT, underscoring the importance of celebrating disabled ways of knowing. Lastly, we offer practical takeaways for Responsible AI practice to push the field of accessible XAI forward.
Nils Rodrigues, Nelusa Pathmanathan, Seyda Öney + 5 more
27 April 2022
We present an exploratory study on the accessibility of images in publications when viewed with color vision deficiencies (CVDs). The study is based on 1,710 images sampled from a visualization dataset (VIS30K) over five years. We simulated four CVDs on each image. First, four researchers (one with a CVD) identified existing issues and helpful aspects in a subset of the images. Based on the resulting labels, 200 crowdworkers provided 30,000 ratings on present CVD issues in the simulated images. We analyzed this data for correlations, clusters, trends, and free text comments to gain a first overview of paper figure accessibility. Overall, about 60 % of the images were rated accessible. Furthermore, our study indicates that accessibility issues are subjective and hard to detect. On a meta-level, we reflect on our study experience to point out challenges and opportunities of large-scale accessibility studies for future research directions.
Leah Findlater, Kelly Avery Mack, Jon E. Froehlich + 3 more
12 January 2021
Accessibility research has grown substantially in the past few decades, yet there has been no literature review of the field. To understand current and historical trends, we created and analyzed a dataset of accessibility papers appearing at CHI and ASSETS since ASSETS’ founding in 1994. We qualitatively coded areas of focus and methodological decisions for the past 10 years (2010-2019, N=506 papers), and analyzed paper counts and keywords over the full 26 years (N=836 papers). Our findings highlight areas that have received disproportionate attention and those that are underserved—for example, over 43% of papers in the past 10 years are on accessibility for blind and low vision people. We also capture common study characteristics, such as the roles of disabled and nondisabled participants as well as sample sizes (e.g., a median of 13 for participant groups with disabilities and older adults). We close by critically reflecting on gaps in the literature and offering guidance for future work in the field.
Citing Articles (3 citations)
Misfitting With AI: How Blind People Verify and Contest AI Errors
Jaylin Herskovitz, Robin Brewer + 3 more
13 August 2024
“It’s Kind of Context Dependent”: Understanding Blind and Low Vision People’s Video Accessibility Preferences Across Viewing Scenarios
Crescentia Jung, Shiri Azenkot + 3 more
16 March 2024
While audio description (AD) is the standard approach for making videos accessible to blind and low vision (BLV) people, existing AD guidelines do not consider BLV users’ varied preferences across viewing scenarios. These scenarios range from how-to videos on YouTube, where users seek to learn new skills, to historical dramas on Netflix, where a user’s goal is entertainment. Additionally, the increase in video watching on mobile devices provides an opportunity to integrate nonverbal output modalities (e.g., audio cues, tactile elements, and visual enhancements). Through a formative survey and 15 semi-structured interviews, we identified BLV people’s video accessibility preferences across diverse scenarios. For example, participants valued action and equipment details for how-to videos, tactile graphics for learning scenarios, and 3D models for fantastical content. We define a six-dimensional video accessibility design space to guide future innovation and discuss how to move from “one-size-fits-all” paradigms to scenario-specific approaches.
Accessibility of Profile Pictures: Alt Text and Beyond to Express Identity Online
John Tang, Edward Cutrell + 1 more
19 April 2023