Affiliation:
1. Stanford University, Stanford, CA
Abstract
Film directors are masters at controlling what we look at when we watch a film. However, there have been few quantitative studies of how gaze responds to the cinematographic conventions thought to influence attention. We have collected a dataset designed to help investigate eye movements in response to higher-level features such as faces, dialogue, camera movements, image composition, and edits. The dataset, which will be released to the community, includes gaze information for 21 viewers watching 15 clips from live-action 2D films, hand-annotated for these high-level features. This work has implications for media studies, display technology, immersive reality, and human cognition.
Funder
Stanford's Google Graduate Fellowship Fund and NSF
Publisher
Association for Computing Machinery (ACM)
Subject
Experimental and Cognitive Psychology, General Computer Science, Theoretical Computer Science
Cited by
16 articles.
1. High-level cinematic knowledge to predict inter-observer visual congruency;Proceedings of the 2023 ACM International Conference on Interactive Media Experiences Workshops;2023-06-12
2. Saccade Direction Information Channel;Neural Information Processing;2023
3. Detecting Input Recognition Errors and User Errors using Gaze Dynamics in Virtual Reality;The 35th Annual ACM Symposium on User Interface Software and Technology;2022-10-28
4. Image Saliency Prediction in Novel Production Scenarios;2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC);2022-10-09
5. Immersion Measurement in Watching Videos Using Eye-tracking Data;IEEE Transactions on Affective Computing;2022-10-01