TEWI Colloquium
Prof. Qing Xu | College of Intelligence and Computing | Tianjin University – China
Thursday, July 17, 2025 at 2:00 p.m. @ S.2.37 | University of Klagenfurt
Abstract: Visual scanning and gaze behavior are fundamental to everyday sensorimotor activities such as walking and driving. Eye–head coordination plays a foundational role in visual scanning, and a deep understanding of the spatiotemporal dynamics of this coordination is therefore important. However, relatively little is known, in computational terms, about the higher-order interactions between eye and head movements during visual scanning in sensorimotor tasks. Moreover, eye gaze directly reflects attention and its allocation: data-driven measurements of gaze behavior provide a quantitative assessment of visual attention, revealing the mental and physical states of individuals. Situational awareness offers a robust and semantically rich framework for interpreting gaze behavior during sensorimotor activities, yet few studies have analyzed eye-gaze data specifically from this perspective.
In this talk, I will present two closely related studies. In the first part, we apply a recent information-theoretic tool, Partial Information Decomposition (PID), to quantify higher-order components — uniqueness, redundancy, and synergy — in the spatiotemporal interactions between eye movements and head motion time series during a VR/AR-based walking experience. To our knowledge, this is the first data-driven approach leveraging an information-theoretic framework to characterize higher-order interactions in eye–head coordination during sensorimotor activities. In the second part, we propose four novel computational measures of gaze behavior grounded in the concept of situational awareness. These measures effectively assess the efficiency of visual scanning and sensorimotor performance, exemplified here using a driving task. Our results demonstrate that the proposed measures are effective and outperform closely related existing methods.
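To make the PID quantities named above concrete, the following is a minimal, self-contained sketch (not the speaker's implementation) of a Williams–Beer-style decomposition for two discrete sources and a target. The XOR joint distribution is chosen as an illustrative test case because its information is purely synergistic; the function names and the toy distribution are this sketch's own assumptions.

```python
import math
from itertools import product

# Hypothetical toy example: joint distribution p(x1, x2, y) with
# x1, x2 uniform binary and y = x1 XOR x2.
p = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def marginal(dist, keep):
    """Sum out all variables except the tuple indices in `keep`."""
    out = {}
    for k, v in dist.items():
        key = tuple(k[i] for i in keep)
        out[key] = out.get(key, 0.0) + v
    return out

def mi(dist, a, b):
    """Mutual information I(A; B) in bits over index groups a and b."""
    pa, pb, pab = marginal(dist, a), marginal(dist, b), marginal(dist, a + b)
    return sum(v * math.log2(v / (pa[k[:len(a)]] * pb[k[len(a):]]))
               for k, v in pab.items() if v > 0)

def specific_info(dist, src, y):
    """Specific information I(X_src; Y=y): information X_src carries about outcome y."""
    py = marginal(dist, (2,))[(y,)]
    px = marginal(dist, (src,))
    pxy = marginal(dist, (src, 2))
    return sum((v / py) * math.log2((v / px[(x,)]) / py)
               for (x, yy), v in pxy.items() if yy == y and v > 0)

# Williams–Beer redundancy I_min: expected minimum specific information over sources.
py = marginal(p, (2,))
redundancy = sum(pv * min(specific_info(p, 0, y), specific_info(p, 1, y))
                 for (y,), pv in py.items())

# Unique contributions and synergy follow from the PID bookkeeping identities.
unique1 = mi(p, (0,), (2,)) - redundancy
unique2 = mi(p, (1,), (2,)) - redundancy
synergy = mi(p, (0, 1), (2,)) - unique1 - unique2 - redundancy

print(redundancy, unique1, unique2, synergy)  # for XOR, essentially all info is synergy
```

For the XOR case, neither source alone predicts the target, yet both together determine it exactly, so redundancy and the unique terms vanish while synergy carries the full 1 bit. This is the kind of higher-order structure PID is designed to separate in eye–head time-series data.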
Bio: Prof. Qing Xu's current research focuses on applying theoretical tools from information theory and complex systems, supported by VR/AR techniques. Since 2018, he has conducted interdisciplinary research on the data-driven, computational analysis and modeling of human behavior, mainly using eye-gaze and behavioral data (including head-motion data, muscle data, and so on) together with probabilistic and information-theoretic tools. In a nutshell, his work aims at understanding, explaining, and modeling human behavior (eye gaze and eye movements) by exploiting theoretical tools, chiefly from information theory and complex systems. He utilizes and extends the latest Partial Information Decomposition (PID) and Assembly Theory (AT) from the perspective of higher-order interactions in complex systems. He also works on low-level signal (image/video) processing, visualization, and Monte Carlo adaptive-sampling-based image synthesis.
More information: https://scholar.google.com/citations?hl=en&user=qg6D4doAAAAJ&view_op=list_works