Decoding task from oculomotor behavior in virtual reality

Author(s): Keshava, A.
Aumeistere, A.
Izdebski, K.
König, P. 
Editor: Spencer, S.N.
Keywords: Decoding; Design intention; Different sizes; Eye movements; F1 scores; Object size; Point of regard; Support vector machines; Task inference; Time points; Eye tracking; Virtual reality; Cross-validation
Publication date: 2020
Publisher: Association for Computing Machinery
Published in: Eye Tracking Research and Applications Symposium (ETRA)
Abstract:
In the present study, we explore whether and how well tasks can be predicted from eye movements in a virtual environment. We designed four different tasks in which participants had to align two cubes of different sizes. To determine where participants looked, we used a ray-based method to calculate the point-of-regard (POR) on each cube at each time point. Using leave-one-subject-out cross-validation, our model predicted the four alignment types with an F1 score of 0.51 ± 0.17 (chance level 0.25). These results suggest that the type of task can be decoded from the aggregation of PORs. We further discuss the implications of object size for task inference and thereby sketch a roadmap for designing intention-recognition experiments in virtual reality. © 2020 ACM.
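
The record does not spell out the ray-based POR computation; below is a minimal sketch of one common approach, intersecting a gaze ray with an axis-aligned cube via the slab method. The function name, inputs, and geometry are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a ray-based point-of-regard (POR) computation:
# intersect the gaze ray with an axis-aligned cube using the slab method.
# All names and inputs here are illustrative, not from the paper.
import numpy as np

def ray_cube_por(origin, direction, box_min, box_max):
    """Return the first intersection point (the POR) of a gaze ray with an
    axis-aligned cube, or None if the ray misses the cube."""
    direction = direction / np.linalg.norm(direction)
    with np.errstate(divide="ignore"):       # tolerate axis-parallel rays
        inv = 1.0 / direction
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t1, t2))      # latest entry across the slabs
    t_far = np.min(np.maximum(t1, t2))       # earliest exit across the slabs
    if t_near > t_far or t_far < 0:
        return None                          # miss, or cube behind the eye
    return origin + max(t_near, 0.0) * direction

por = ray_cube_por(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                   np.array([-0.5, -0.5, 2.0]), np.array([0.5, 0.5, 3.0]))
print(por)  # -> [0. 0. 2.], the POR on the cube's near face
```

Likewise, a hedged sketch of the decoding step: leave-one-subject-out cross-validation of a support vector machine (SVMs appear in the record's keywords) scored with macro F1, run here on synthetic placeholder features. The feature extraction, data shapes, and hyperparameters are assumptions.

```python
# Hypothetical sketch (not the authors' code) of leave-one-subject-out
# cross-validation for 4-way task decoding with an SVM, via scikit-learn.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 20, 40, 16

# Placeholder inputs: one row of aggregated POR features per trial,
# one of four alignment-task labels, and the subject ID of each trial.
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 4, size=len(X))
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

# Each fold holds out every trial of one subject (leave-one-subject-out).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, groups=groups,
                         cv=LeaveOneGroupOut(), scoring="f1_macro")
print(f"macro F1: {scores.mean():.2f} ± {scores.std():.2f}")
# With random features this hovers near the 0.25 chance level; the paper
# reports 0.51 ± 0.17 on real aggregated PORs.
```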
Description:
2020 ACM Symposium on Eye Tracking Research and Applications (ETRA 2020); Conference date: 2–5 June 2020; Conference code: 160051
ISBN: 9781450371346
DOI: 10.1145/3379156.3391338
External URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85085736693&doi=10.1145%2f3379156.3391338&partnerID=40&md5=6a2387a7a93322c10a3f02643b48e60d
