DOI: 10.1145/3544548.3580749
Published 19 April 2023 at the International Conference on Human Factors in Computing Systems

BAGEL: An Approach to Automatically Detect Navigation-Based Web Accessibility Barriers for Keyboard Users

Paul T. Chiou Ali S. Alotaibi William G. J. Halfond

Abstract

The Web has become an essential part of many people’s daily lives, enabling them to complete everyday and essential tasks online and access important information resources. The ability to navigate the Web via the keyboard interface is critical to people with various types of disabilities. However, modern websites often violate web accessibility guidelines for keyboard navigability. In this paper, we present a novel approach for automatically detecting web accessibility barriers that prevent or hinder keyboard users’ ability to navigate web pages. An extensive evaluation on real-world subjects showed that our technique detects navigation-based keyboard accessibility barriers in web applications with high precision and recall.
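The abstract does not spell out BAGEL's algorithm, but the class of barrier it targets can be illustrated with a small sketch: model a page's Tab-key order as a successor map and flag focusable elements that sequential keyboard navigation never reaches. The element names and the traversal model below are hypothetical, not the paper's actual technique.

```python
# Hypothetical sketch: follow the Tab key from a starting element and flag
# focusable elements that keyboard traversal never visits (potential barriers).
# Element names and the successor-map model are illustrative only.

def unreachable_by_tab(focusables, tab_next, start):
    """Follow Tab from `start`; return focusables that are never visited."""
    visited = set()
    current = start
    while current is not None and current not in visited:
        visited.add(current)
        current = tab_next.get(current)  # None models focus leaving the page
    return set(focusables) - visited

# Example: a custom widget traps focus in a loop, so 'submit' is unreachable.
focusables = ["search", "menu", "widget", "submit"]
tab_next = {"search": "menu", "menu": "widget", "widget": "menu"}
print(unreachable_by_tab(focusables, tab_next, "search"))  # {'submit'}
```

The loop also terminates on revisiting an element, which is how a keyboard trap (WCAG's "No Keyboard Trap" criterion) would manifest in this toy model.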

Related Scientific Articles

Large-scale study of web accessibility metrics

Carlos Duarte Beatriz Martins

9 December 2022

Evaluating the accessibility of web resources is usually done by checking the conformance of the resource against a standard or set of guidelines (e.g., the WCAG 2.1). The result of the evaluation will indicate which guidelines are respected (or not) by the resource. While it might hint at the accessibility level of web resources, it is often complicated to compare the level of accessibility of different resources, or of different versions of the same resource, from evaluation reports. Web accessibility metrics synthesize the accessibility level of a web resource into a quantifiable value. The large number of accessibility metrics makes it challenging to choose which ones to use. In this paper, we explore the relationship between web accessibility metrics. For that purpose, we investigated eleven web accessibility metrics. The metrics were computed from automated accessibility evaluations obtained using QualWeb over a set of nearly three million web pages. By computing the metrics over this sample, it was possible to identify groups of metrics that offer similar results. Our analysis shows that there are metrics that behave similarly, which assists, when deciding what metrics to use, in picking the metric that is less resource intensive or for which it might be easier to collect the inputs.
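The idea of a metric that collapses an evaluation report into one comparable number can be shown with a deliberately simple example: the failure rate of automated checks on a page. The metrics studied in the paper are more elaborate (e.g., weighting by guideline), and the check names below are made up; this only illustrates the general shape.

```python
# Hypothetical sketch of one simple conformance metric: the failure rate of
# automated checks on a page. Check identifiers are illustrative, not taken
# from QualWeb or the paper.

def failure_rate(report):
    """report: list of (check_id, passed) pairs from an automated evaluation."""
    if not report:
        return 0.0
    failed = sum(1 for _, passed in report if not passed)
    return failed / len(report)

page_a = [("img-alt", True), ("contrast", False), ("label", True), ("lang", True)]
page_b = [("img-alt", False), ("contrast", False), ("label", True), ("lang", True)]
print(failure_rate(page_a), failure_rate(page_b))  # 0.25 0.5
```

Because each page reduces to a single number, two pages (or two versions of one page) become directly comparable, which is exactly the property the paper's metrics provide over raw conformance reports.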

A Probabilistic Model and Metrics for Estimating Perceived Accessibility of Desktop Applications in Keystroke-Based Non-Visual Interactions

Syed Masum Billah Md. Touhidul Islam Donald E. Porter

19 April 2023

Perceived accessibility of an application is a subjective measure of how well an individual with a particular disability, skills, and goals experiences the application via assistive technology. This paper first presents a study with 11 blind users to report how they perceive the accessibility of desktop applications while interacting via assistive technology such as screen readers and a keyboard. The study identifies the low navigational complexity of the user interface (UI) elements as the primary contributor to higher perceived accessibility of different applications. Informed by this study, we develop a probabilistic model that accounts for the number of user actions needed to navigate between any two arbitrary UI elements within an application. This model contributes to the area of computational interaction for non-visual interaction. Next, we derive three metrics from this model: complexity, coverage, and reachability, which reveal important statistical characteristics of an application indicative of its perceived accessibility. The proposed metrics are appropriate for comparing similar applications and can be fine-tuned for individual users to cater to their skills and goals. Finally, we present five use cases, demonstrating how blind users, application developers, and accessibility practitioners can benefit from our model and metrics.
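The reachability metric described above can be sketched by treating an application's UI as a directed graph whose edges are single keystroke actions, then computing the fraction of ordered element pairs connected by some action sequence. The paper's probabilistic model is richer than this; the graph and names below are hypothetical.

```python
# Hypothetical sketch: UI elements as graph nodes, single keystroke actions
# as directed edges; reachability = fraction of ordered (src, dst) pairs
# where dst can be reached from src. Illustrative only.
from collections import deque

def reachability(elements, actions):
    """actions: dict mapping an element to the elements one keystroke away."""
    reachable_pairs = 0
    for src in elements:
        seen, queue = {src}, deque([src])
        while queue:                      # breadth-first search from src
            node = queue.popleft()
            for nxt in actions.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        reachable_pairs += len(seen) - 1  # exclude src itself
    n = len(elements)
    return reachable_pairs / (n * (n - 1))

ui = {"menu": ["editor"], "editor": ["menu", "dialog"], "dialog": []}
print(reachability(list(ui), ui))
```

In this toy UI, "dialog" has no outgoing actions, so two of the six ordered pairs are unreachable, which lowers the score — the kind of statistical signal the paper associates with lower perceived accessibility.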

ALL: Accessibility Learning Labs for Computing Accessibility Education

Daniel E. Krutz Samuel A. Malachowsky Saad Khan + 1 more

26 June 2021

Our Accessibility Learning Labs not only inform participants about the need for accessible software, but also show them how to properly create and implement it. These experiential browser-based labs enable participants, instructors, and practitioners to engage with our material using only their browser. In the following document, we provide a brief overview of our labs, how they may be adopted, and some of their preliminary results. Complete project material is publicly available on our project website: http://all.rit.edu

AXNav: Replaying Accessibility Tests from Natural Language

Maryam Taeb Ruijia Cheng E. Schoop + 3 more

3 October 2023

Developers and quality assurance testers often rely on manual testing to test accessibility features throughout the product lifecycle. Unfortunately, manual testing can be tedious, often has an overwhelming scope, and can be difficult to schedule amongst other development milestones. Recently, Large Language Models (LLMs) have been used for a variety of tasks including automation of UIs. However, to our knowledge, no one has yet explored the use of LLMs in controlling assistive technologies for the purposes of supporting accessibility testing. In this paper, we explore the requirements of a natural language based accessibility testing workflow, starting with a formative study. From this we build a system that takes a manual accessibility test instruction in natural language (e.g., “Search for a show in VoiceOver”) as input and uses an LLM combined with pixel-based UI Understanding models to execute the test and produce a chaptered, navigable video. In each video, to help QA testers, we apply heuristics to detect and flag accessibility issues (e.g., Text size not increasing with Large Text enabled, VoiceOver navigation loops). We evaluate this system through a 10-participant user study with accessibility QA professionals who indicated that the tool would be very useful in their current work and performed tests similarly to how they would manually test the features. The study also reveals insights for future work on using LLMs for accessibility testing.

Assistive-Technology Aided Manual Accessibility Testing in Mobile Apps, Powered by Record-and-Replay

Navid Salehnamadi Ziyao He S. Malek

19 April 2023

Billions of people use smartphones on a daily basis, including the 15% of the world’s population with disabilities. Mobile platforms encourage developers to manually assess their apps’ accessibility in the way disabled users interact with phones, i.e., through Assistive Technologies (AT) like screen readers. However, most developers only test their apps with touch gestures and do not have enough knowledge to use AT properly. Moreover, automated accessibility testing tools typically do not consider AT. This paper introduces a record-and-replay technique that records the developers’ touch interactions, replays the same actions with an AT, and generates a visualized report of various ways of interacting with the app using ATs. Empirical evaluation of this technique on real-world apps revealed that while a user study is the most reliable way of assessing accessibility, our technique can aid developers in detecting complex accessibility issues at different stages of development.
