Autism Research Via Smartphone


One of the most effective means of investigating and understanding autism is eye tracking: participants are shown photos or videos while software records where their gaze rests. Autistic individuals are more likely to focus on nonsocial aspects of an image, such as objects or background patterns, while neurotypical participants are more likely to focus on people’s faces.

Ralph Adolphs, the Bren Professor of Psychology, Neuroscience, and Biology and an affiliated faculty member of the Tianqiao and Chrissy Chen Institute for Neuroscience, has been researching autism for decades as part of a larger project aimed at understanding the neuroscience of human social behavior. In his Emotion and Social Cognition Lab, researchers study both neurotypical individuals and people with brain damage, brain malformations, or neuropsychiatric conditions such as obsessive-compulsive disorder (OCD) or autism spectrum disorder (ASD) in order to gain a finer grasp of how the brain processes emotion and supports interaction with others.

Autism is a particularly rich field for research into emotion and social cognition since it is characterized by, among other things, differences in social behavior. Adolphs has been exploring its features by bringing adults with autism into the lab to track their eye movements when they are exposed to a variety of visual stimuli.

This research has yielded many interesting findings but has been inherently limited by the expense of laboratory eye-tracking technology. Adolphs and others have therefore explored whether smartphones, which can display images and video while their cameras record the user’s face, might capture the same information as established eye-tracking technology at considerably less expense.

“Smartphone-based gaze estimation for in-home autism research” was recently published in the journal Autism Research. The paper’s authors include postdoc Na Yeon Kim and graduate student Qianying Wu, who together led the study, as well as co-authors Jasmin Turner, Lynn K. Paul, and Adolphs of Caltech; Daniel P. Kennedy of Indiana University; and Junfeng He, Kai Kohlhoff, Na Dai, and Vidhya Navalpakkam, all of Google Research in Mountain View, California, who were responsible for analyzing the data.

Read more on the TCCI for Neuroscience website