Face Perception and Audio-Visual Integration in Children with Autism Spectrum Disorder
Funded by the E.K. Shriver Center
Individuals with autism spectrum disorder (ASD) have been shown to avoid eye contact during social interaction and, perhaps as a result, to have great difficulty integrating verbal and non-verbal communicative information during face-to-face interaction. This study ("Look Who's Talking") is designed as a step toward relating these two skills and investigating how an individual's reduced looking at a speaker's eyes affects his or her face-to-face emotional communication skills.

The study consists of two interrelated tasks that use eye-tracking technology to assess looking patterns during tasks requiring integration of auditory (spoken language) and visual (facial expressions) information. In the first task, subjects are asked to determine audio-visual (AV) synchronicity in a split-screen video in which the audio track randomly switches synchronicity between the two sides of the screen. By repeating this paradigm with different instructions, we expect to identify differences in AV integration and looking patterns between explicit and implicit task designs. In the second task, subjects are presented with emotional auditory-only sentences, which they must then match to one of two emotional facial expressions. The aim of this task is to determine the sensitivity of reaction time, accuracy, and looking patterns to auditory and facial expressions of varying intensity.

Dr. Grossman's goal is to characterize the looking patterns of individuals with ASD in response to auditory-visual language information, and to determine whether explicit instructions can change the natural looking patterns of these adolescents with ASD. We hope ultimately to help shape goals for intervention strategies targeting social-pragmatic language in this population.