Accessibility Research Lab

Various Presenters (Allen School)

Research Talk

Thursday, November 2, 2023, 3:30 pm

Abstract

Presenters: Venkatesh Potluri; Anant Mittal; Aashaka Desai & Aaleyah Lewis; Xia Su

Speaker: Venkatesh Potluri

Notably Inaccessible — Data Driven Understanding of Data Science Notebook (In)Accessibility

Abstract:
Computational notebooks, tools that facilitate storytelling through exploration, data analysis, and information visualization, have become the widely accepted standard in the data science community across academic and industry settings. While there is extensive research on how data scientists use computational notebooks, on their pain points, and on enabling collaborative data science practices, very little is known about the accessibility barriers that blind and visually impaired (BVI) users experience with these notebooks. In this talk, I will present findings from our large-scale systematic analysis of 100,000 Jupyter notebooks to identify accessibility challenges in published notebooks. These barriers stem from the tools used, the infrastructure available, and the authoring practices followed to create and share notebooks. I will discuss recommendations to improve the accessibility of notebook data artifacts, suggest authoring practices, and propose infrastructure changes to make notebooks accessible.
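The talk centers on findings from the notebook analysis rather than on tooling, but a rough sketch may help readers picture the kind of automated check such an audit could involve. The example below is an illustration, not the authors' pipeline: it uses Python's nbformat library to flag two common gaps in a published notebook, Markdown images with empty alt text and image outputs that ship with no textual fallback. The file name and the specific rules are assumptions.

    # Illustrative audit, not the study's actual pipeline.
    import re
    import nbformat  # reads .ipynb files into a structured object

    def audit_notebook(path):
        nb = nbformat.read(path, as_version=4)
        issues = []
        for i, cell in enumerate(nb.cells):
            if cell.cell_type == "markdown":
                # ![](...) is a Markdown image with empty alt text
                if re.search(r"!\[\s*\]\(", cell.source):
                    issues.append((i, "markdown image without alt text"))
            elif cell.cell_type == "code":
                for out in cell.get("outputs", []):
                    data = out.get("data", {})
                    has_image = any(k.startswith("image/") for k in data)
                    has_text = "text/plain" in data or "text/markdown" in data
                    if has_image and not has_text:
                        issues.append((i, "image output with no textual fallback"))
        return issues

    # Hypothetical usage on a single published notebook
    for cell_index, problem in audit_notebook("example.ipynb"):
        print(f"cell {cell_index}: {problem}")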

Bio: Venkatesh is a PhD candidate advised by Jennifer Mankoff. He studies the accessibility barriers that blind or visually impaired developers experience in high-skilled programming domains such as data science, physical computing, and user interface design. His research contributes new interaction techniques for accessible programming and real-world systems that improve developer tools.

Speaker: Anant Mittal

Jod: Examining the Design and Implementation of a Videoconferencing Platform for Mixed Hearing Groups

Abstract:
Videoconferencing usage has surged in recent years, but current platforms present significant accessibility barriers for the 430 million d/Deaf or hard of hearing people worldwide. Informed by prior work examining accessibility barriers in current videoconferencing platforms, we designed and developed Jod, a videoconferencing platform to facilitate communication in mixed hearing groups. Key features include support for customizing visual layouts and a notification system for requesting attention and influencing behavior. Using Jod, we conducted six mixed hearing group sessions with 34 participants, including 18 d/Deaf or hard of hearing participants, 10 hearing participants, and 6 sign language interpreters. We found that participants rearranged their visual layouts based on their hearing ability and dynamically adapted to the changing group communication context, and that notifications were useful but pointed to a need for designs that cause fewer interruptions. We further provide insights for future videoconferencing designs.

Bio: Anant is a fifth-year PhD student in the DUB group, advised by James Fogarty. His interests lie in building systems for real-world HCI impact.

Speakers: Aashaka Desai & Aaleyah Lewis

Working at the Intersection of Race, Disability, and Technology

Abstract:
Accessibility research has paid little attention to the intersection of race and disability, missing the full nuance of marginalized and “otherized” groups. In this upcoming ASSETS paper, we offer a framework that outlines opportunities for research to engage the intersection of race and disability. We present a series of case studies that exemplify engagement at this intersection throughout the course of research, along with reflections on teaching at this intersection. This work highlights the value of considering how constructs of race and disability work alongside each other within accessibility research studies, designs of socio-technical systems, and education.

Bio: Aashaka is a fourth-year PhD student at UW, advised by Jennifer Mankoff and Richard Ladner. Her research focuses on d/Deaf and hard-of-hearing communication accessibility and designing tech that supports languaging.

Bio: Aaleyah is a third-year PhD student at UW, advised by James Fogarty. Her research focuses on designing more equitable speech recognition systems that support people of color with disabilities who have varying dialects.

Speaker: Xia Su

RASSAR: Room Accessibility and Safety Scanning in Augmented Reality

Abstract:
The safety and accessibility of our homes are critical to quality of life and evolve as we age, become ill, host guests, or experience life events such as having children. Researchers and health professionals have created assessment instruments, such as paper checklists, that enable homeowners and trained experts to identify and potentially mitigate safety and access issues. With advances in computer vision, augmented reality (AR), and mobile sensors, new approaches are now possible. We introduce RASSAR, a mobile AR application that uses LiDAR and real-time computer vision to semi-automatically identify, localize, and visualize indoor accessibility and safety issues such as an inaccessible table height or an unsafe loose rug. To inform RASSAR’s design, we conducted a formative study with 18 participants across five stakeholder groups. We report on our design findings, the implementation of RASSAR, and a technical performance evaluation across ten homes demonstrating state-of-the-art performance.
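The abstract does not spell out RASSAR’s detection rules, but the shape of such a check is easy to picture: compare a dimension recovered from the scan against an accessibility guideline. The sketch below is an assumed example, using the ADA's 28-34 inch range for accessible dining and work surfaces as the threshold; it is not RASSAR's implementation.

    # Assumed rule-of-thumb check, not RASSAR's actual logic.
    ACCESSIBLE_TABLE_HEIGHT_IN = (28.0, 34.0)  # ADA-style range for work surfaces, in inches

    def check_table_height(height_meters: float) -> str:
        height_inches = height_meters / 0.0254  # scanned height, converted to inches
        low, high = ACCESSIBLE_TABLE_HEIGHT_IN
        if low <= height_inches <= high:
            return "ok"
        return f"flag: surface at {height_inches:.1f} in is outside the {low}-{high} in range"

    print(check_table_height(0.95))  # a 95 cm table top would be flagged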

Bio: Xia Su is a third-year PhD student in Computer Science at the University of Washington. His research interests lie in human-computer interaction, with a special focus on AR, accessibility, and creativity support.