Welcome to the VariAbility Lab at Carnegie Mellon University. Our research mission is to create inclusive workplaces where all people, especially people with disabilities and neurodivergent people, can succeed without discrimination.
Our current research projects focus on creating accommodating physical work environments, collaboration tools that facilitate communication between people with differing abilities, and educational programs that teach communication skills and strengthen social relationships among team members.
We are recruiting students (Master's and undergraduate) for Spring 2025. If you are interested in working on any of our projects, please get in touch! Include your CV and a brief description of the research you'd like to do.
Despite its advanced functionality, much assistive technology (AT) is rejected or abandoned by the people with disabilities it is designed to serve. We explore how AT can influence users' social standing and elicit stigma, and we propose designing systems that fit users' social contexts.
Our mission is to support the preparation, recruitment, persistence, advancement, and management of neurodivergent individuals in the workplace. We mentor and support neurodivergent students in achieving their educational goals in high school, college, and beyond. In addition, we develop pedagogy that teaches neurotypical coworkers how to work well with their neurodivergent colleagues.
We run the Neurodiversity at Work Research Workshop series, which was held in 2018, 2019, 2021, 2022, and 2023.
Andrew Begel also founded the Southern Great Lakes Region Neurodiversity at Work Hub.
The NAPE project aims to improve the accessibility of educational materials by adapting them to the cognitive styles of individuals, including those with ADHD, dyslexia, and autism. It seeks to create an inclusive learning environment by removing traditional learning barriers.
Neurotypical people can't always tell when their autistic colleagues are experiencing distress from sensory overstimulation. The resulting lack of empathy can lead to stigma and discrimination against those autistic colleagues. Our goal is to help neurotypical people become better allies to their autistic colleagues by educating them about autistic experiences. We center the autistic person's perspective in an immersive VR lesson that explains the effects of sensory overstimulation and shows neurotypical people how they can help. We expect this better understanding to foster empathy.
Conversations between autistic and non-autistic people can go awry due to differing cognitive styles. Challenges that arise in workplace conversations such as job interviews or performance evaluations can lead to poor outcomes for autistic employees. FIT employs AI to identify verbal and non-verbal conversational cues that signify when interactions are going poorly. Our goal is to facilitate conversations and help the conversants repair miscommunications and misunderstandings.
This year, we are running two projects. The first is a prototype video calling platform, built on WebRTC, that will support our studies of autistic/non-autistic 1:1 conversations. The second is an analysis of a corpus of 1:1 video conversations to find critical moments that lead to conversational trouble and subsequent repair. We are developing a set of metrics to identify good and bad moments in conversations.
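As one example of the kind of metric we have in mind, the sketch below flags candidate critical moments from turn-taking timing alone. It is a minimal illustration, not the project's implementation: the Utterance structure, the threshold values, and the assumption of diarized transcripts with per-utterance timestamps are ours for the example.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str   # "A" or "B" in a 1:1 conversation
    start: float   # seconds from the start of the call
    end: float

def turn_gaps(utterances: list[Utterance]) -> list[tuple[float, float]]:
    """Return (time, gap) pairs at each change of speaker.

    A large positive gap is a stalled floor transfer; a negative gap
    is overlapping speech. Both are candidate trouble moments.
    """
    ordered = sorted(utterances, key=lambda u: u.start)
    return [(cur.start, cur.start - prev.end)
            for prev, cur in zip(ordered, ordered[1:])
            if prev.speaker != cur.speaker]

# Illustrative thresholds: gaps over 2 s, or overlaps over 0.5 s.
def flag_candidates(utterances, long_gap=2.0, overlap=-0.5):
    return [(t, g) for t, g in turn_gaps(utterances)
            if g > long_gap or g < overlap]
```

In practice, timing features like these would be combined with verbal and non-verbal cues before a moment is labeled as one needing repair.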
Neurodiversity describes natural variations in human cognition that differ from the dominant neurotype. Cognitive variations such as autism and ADHD each come with their own strengths, yet neurodivergent people are rarely included in the design processes that create user experiences. Our goal is to investigate design issues arising from the pain points, needs, and desires of neurodivergent computer users, and to reduce the mismatch between our users' cognitive styles and the expectations our software makes of them. Our findings are being used to develop neurodivergent user personas that help designers heuristically evaluate and improve the user experiences they build into their software.
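To make the persona idea concrete, here is a minimal sketch of how such a persona might be recorded and walked against a UI during heuristic evaluation. The fields, the example values, and the heuristic_flags helper are illustrative assumptions, not findings or artifacts from our studies.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    cognitive_style: str                # e.g., "ADHD", "autistic", "dyslexic"
    strengths: list[str] = field(default_factory=list)
    pain_points: list[str] = field(default_factory=list)  # UI traits to avoid

# A hypothetical persona for illustration only.
mia = Persona(
    name="Mia",
    cognitive_style="ADHD",
    strengths=["rapid ideation", "deep focus on engaging tasks"],
    pain_points=["unbatched notifications", "dense unchunked text"],
)

def heuristic_flags(persona: Persona, ui_traits: set[str]) -> list[str]:
    """List the traits of a UI that collide with a persona's pain points."""
    return [p for p in persona.pain_points if p in ui_traits]

# heuristic_flags(mia, {"dense unchunked text", "high-contrast theme"})
# -> ["dense unchunked text"]
```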
Sighted people generally use visual and pointing references to indicate areas of interest when speaking to collaborators. However, blind and low vision people cannot perceive these references. This leads to miscommunication, impeding collaborators' ability to attend to the same things and preventing effective, efficient collaboration. The GRACE project combines gaze and gesture recognition to locate areas of interest and identify the on-screen objects they may reference. Our system converts these references into written form suitable for announcement via screen reader, reducing the burden on blind and low vision users of locating the referenced object.
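The reference-resolution step can be pictured as a small pipeline: take a screen coordinate from the gaze or gesture recognizer, find the labeled on-screen object containing it, and phrase the result for a screen reader. The sketch below is a simplified illustration under two assumptions of ours: that the recognizers already emit a screen coordinate, and that the UI exposes accessible labels with bounding boxes.

```python
from dataclasses import dataclass

@dataclass
class ScreenObject:
    label: str  # accessible name, e.g., "Quarterly revenue chart"
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

def resolve_reference(point: tuple[float, float],
                      objects: list[ScreenObject]) -> ScreenObject | None:
    """Map a gaze/gesture screen coordinate to the object it lands on."""
    px, py = point
    hits = [o for o in objects if o.contains(px, py)]
    return hits[0] if hits else None

def announce(speaker: str, obj: ScreenObject | None) -> str:
    """Render a deictic reference ("this", "over here") as screen reader text."""
    if obj is None:
        return f"{speaker} is referring to an unlabeled region of the screen."
    return f"{speaker} is referring to: {obj.label}."
```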