Stratification of Children with Autism Spectrum Disorder Through Fusion of Temporal Information in Eye-gaze Scan-Paths

Authors:

Adham Atyabi (1), Frederick Shic (2), Jiajun Jiang (3), Claire E. Foster (4), Erin Barney (5), Minah Kim (6), Beibin Li (7), Pamela Ventola (8), Chung Hao Chen (9)

Affiliations:

1. University of Colorado Colorado Springs, Colorado Springs, CO, USA

2. University of Washington and Seattle Children’s Research Institute, Seattle, WA, USA

3. Old Dominion University, Norfolk, VA, USA

4. Binghamton University (SUNY), Binghamton, NY, USA

5. Seattle Children’s Research Institute, Seattle, WA, USA

6. University of Virginia, Charlottesville, VA, USA

7. University of Washington, Redmond, WA, USA

8. Yale University, New Haven, CT, USA

9. Old Dominion University, Norfolk, VA, USA

Abstract

Background: Differences in looking patterns have been shown to separate individuals with Autism Spectrum Disorder (ASD) from Typically Developing (TD) controls. Recent studies have shown that, in children with ASD, these patterns change with intellectual and social impairments, suggesting that patterns of social attention provide indices of clinically meaningful variation in ASD.

Method: We conducted a naturalistic study of children with ASD (n = 55) and typical development (TD, n = 32). A battery of eye-tracking video stimuli was used in the study, including Activity Monitoring (AM), Social Referencing (SR), Theory of Mind (ToM), and Dyadic Bid (DB) tasks. This work reports on the feasibility of spatial and spatio-temporal scan-paths generated from the eye-gaze patterns of these paradigms for stratifying the ASD and TD groups.

Algorithm: This article presents an approach for automatically identifying clinically meaningful information contained within the raw eye-tracking data of children with ASD and TD. The proposed mechanism combines eye-gaze scan-paths (spatial information) fused with temporal information and pupil-velocity data, and uses a Convolutional Neural Network (CNN) to stratify diagnosis (ASD or TD).

Results: Spatial eye-gaze representations in the form of scan-paths are feasible for stratifying ASD and TD (ASD vs. TD, DNN: 74.4%). These spatial eye-gaze features are shown to be sensitive to factors mediating heterogeneity in ASD: age (ASD, 2–4 years old vs. 10–17 years old, CNN: 80.5%), gender (male vs. female ASD, DNN: 78.0%), and the mixture of age and gender (5–9 y/o male vs. 5–9 y/o female ASD, DNN: 98.8%). Limiting scan-path representations temporally increased variance in stratification performance, attesting to the importance of the temporal dimension of eye-gaze data. Spatio-temporal scan-paths that incorporate eye-movement velocity into the eye-gaze images outperform the other feature-representation methods, achieving a classification accuracy of 80.25%.

Conclusion: The results indicate the feasibility of scan-path images for stratifying ASD and TD diagnoses in children of varying ages and genders. Infusing temporal information and velocity data improves the classification performance of our deep-learning models. These novel velocity-fused spatio-temporal scan-path features capture eye-gaze patterns that reflect age, gender, and the mixed effect of age and gender, factors that are associated with heterogeneity in ASD and with the difficulty of identifying robust biomarkers for ASD.
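To make the Algorithm section concrete, the following is a minimal sketch in Python (NumPy and PyTorch) of how a velocity-fused spatio-temporal scan-path image might be rasterized from raw gaze samples and classified with a small CNN. The channel layout (velocity, relative timestamp, visit density), the 64x64 image size, and the network architecture are illustrative assumptions for this sketch, not the authors' exact pipeline.

# Minimal sketch: render a velocity-fused spatio-temporal scan-path image
# from raw gaze samples, then classify it with a small CNN.
# Assumptions (not from the paper): gaze arrives as (x, y, t) samples in
# screen-normalized coordinates; velocity, time, and visit density are
# encoded as three separate image channels.

import numpy as np
import torch
import torch.nn as nn

def scanpath_image(x, y, t, size=64):
    """Rasterize a gaze trace into a 3-channel spatio-temporal image.

    Channel 0: peak gaze velocity observed at each visited pixel (normalized).
    Channel 1: relative timestamp of the most recent visit (0 = start, 1 = end).
    Channel 2: visit count (spatial density of the scan-path).
    """
    img = np.zeros((3, size, size), dtype=np.float32)
    # Instantaneous velocity from finite differences of position over time.
    v = np.hypot(np.diff(x, prepend=x[0]), np.diff(y, prepend=y[0]))
    v /= (np.diff(t, prepend=t[0] - 1e-3) + 1e-6)
    v /= v.max() + 1e-6
    ts = (t - t[0]) / (t[-1] - t[0] + 1e-6)
    cols = np.clip((x * (size - 1)).astype(int), 0, size - 1)
    rows = np.clip((y * (size - 1)).astype(int), 0, size - 1)
    for r, c, vi, ti in zip(rows, cols, v, ts):
        img[0, r, c] = max(img[0, r, c], vi)  # peak velocity at this pixel
        img[1, r, c] = ti                     # most recent visit time
        img[2, r, c] += 1.0                   # raw visit count
    img[2] /= img[2].max() + 1e-6
    return img

class ScanpathCNN(nn.Module):
    """Small CNN producing binary ASD-vs-TD logits from scan-path images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # logits for {ASD, TD}

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Usage on synthetic gaze data (real input would come from the eye tracker):
rng = np.random.default_rng(0)
x, y = rng.random(500), rng.random(500)
t = np.sort(rng.random(500)) * 10.0
img = torch.from_numpy(scanpath_image(x, y, t)).unsqueeze(0)
logits = ScanpathCNN()(img)
print(logits.shape)  # torch.Size([1, 2])

Encoding velocity and time as separate image channels lets the CNN weigh eye-movement dynamics alongside where the gaze landed, which is the intuition behind the paper's finding that velocity-fused spatio-temporal scan-paths outperform purely spatial ones.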

Funder

NIH

Simons Foundation

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science


Cited by 5 articles.