Conversations between autistic and non-autistic people can go awry due to differing cognitive styles. Challenges that arise in workplace conversations, such as job interviews or performance evaluations, can lead to poor outcomes for autistic employees. FIT employs AI to identify verbal and non-verbal conversational cues that signal when an interaction is going poorly. Our goal is to facilitate these conversations and help the participants repair miscommunications and misunderstandings.
This year, we are running two projects. The first is to build a prototype video-calling platform on top of WebRTC that supports our studies of autistic/non-autistic 1:1 conversations. The second is to analyze a corpus of 1:1 video conversations for critical moments that lead to conversational breakdowns and subsequent repair. We will develop a set of metrics to identify good and bad moments in conversations.
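For concreteness, here is a minimal, hypothetical sketch (not our platform's code) of how a browser establishes a 1:1 WebRTC call; `signaling` is a stand-in for whatever channel (e.g., a WebSocket) exchanges session descriptions and ICE candidates:

```typescript
// Minimal sketch of establishing a 1:1 WebRTC call in the browser.
// `signaling` is a hypothetical stand-in for the channel that carries
// offers, answers, and ICE candidates between the two participants.
declare const signaling: {
  send(msg: object): void;
  onMessage(handler: (msg: any) => void): void;
};

async function startCall(): Promise<void> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Capture local audio/video and send it to the peer.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Forward ICE candidates to the remote peer as they are discovered.
  pc.onicecandidate = (event) => {
    if (event.candidate) signaling.send({ candidate: event.candidate });
  };

  // Attach the remote peer's media when it arrives.
  pc.ontrack = (event) => {
    const video = document.querySelector<HTMLVideoElement>("#remote");
    if (video) video.srcObject = event.streams[0];
  };

  // Create and send an offer; apply the answer and candidates on reply.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ offer });

  signaling.onMessage(async (msg) => {
    if (msg.answer) await pc.setRemoteDescription(msg.answer);
    if (msg.candidate) await pc.addIceCandidate(msg.candidate);
  });
}
```

Study-specific features such as consented recording and conversation instrumentation would presumably be layered on top of this basic connection.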
Despite its advanced functionality, much assistive technology (AT) is rejected or abandoned by individuals with disabilities. We explore how AT can influence users’ social standing and elicit stigma, and propose designing systems to fit users’ social contexts. Our current focus is on developing ADHD-centered executive functioning support tools (ACES).
The NAPE project aims to improve the accessibility of educational materials by adapting them to the cognitive styles of individuals, including those with ADHD, dyslexia, and autism. It seeks to create an inclusive learning environment by removing the barriers that traditional learning materials impose.
Neurotypical people can’t always tell when their autistic colleagues are experiencing distress from sensory overstimulation. The resulting lack of empathy can lead to stigma and discrimination against those autistic colleagues. Our goal is to help neurotypical people become better allies to their autistic colleagues by educating them about autistic experiences. We center the autistic person’s perspective in an immersive VR lesson that explains the effects of sensory overstimulation and shows neurotypical people how they can help. We expect that better understanding will lead to greater empathy.
Sighted people generally use visual and pointing references to indicate areas of interest when speaking with collaborators. However, blind and low vision people cannot perceive these references. This leads to miscommunication, impeding collaborators’ ability to attend to the same things and preventing effective and efficient collaboration. The GRACE project combines gaze and gesture recognition to locate areas of interest and identify the on-screen objects they may reference. Our system converts these references into text suitable for announcement via a screen reader, reducing the burden on blind and low vision users of locating the referenced object.
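As a rough sketch of that final step (not GRACE's actual implementation), the TypeScript below shows one way a reference point estimated by an upstream gaze/gesture recognizer could be resolved to an on-screen element and announced to a screen reader via an ARIA live region; the coordinates and the `describeElement` heuristic are illustrative assumptions:

```typescript
// Sketch: resolve an estimated reference point to an on-screen element
// and announce it via an ARIA live region, which screen readers read aloud.
// The (x, y) coordinates are assumed to come from an upstream recognizer.

// Create (once) a visually hidden live region for announcements.
const liveRegion = document.createElement("div");
liveRegion.setAttribute("aria-live", "polite");
liveRegion.style.position = "absolute";
liveRegion.style.left = "-9999px"; // off-screen but still read by screen readers
document.body.appendChild(liveRegion);

function describeElement(el: Element): string {
  // Prefer an explicit accessible name; fall back to visible text or tag name.
  const label = el.getAttribute("aria-label");
  if (label) return label;
  const text = el.textContent?.trim();
  return text ? text.slice(0, 80) : el.tagName.toLowerCase();
}

// Called whenever the recognizer reports a referenced screen location.
function announceReference(x: number, y: number): void {
  const target = document.elementFromPoint(x, y);
  if (!target) return;
  // Updating a live region's text triggers a screen-reader announcement.
  liveRegion.textContent = `Partner is pointing at: ${describeElement(target)}`;
}
```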