ETRA 2022: Seattle, WA, USA
- Frederick Shic, Enkelejda Kasneci, Mohamed Khamis, Hans Gellersen, Krzysztof Krejtz, Daniel Weiskopf, Tanja Blascheck, Jessica Bradshaw, Hana Vrzakova, Kamran Binaee, Michael Burch, Peter Kiefer, Roman Bednarik, Diako Mardanbegi, Christopher Clarke, Rakshit Sunil Kothari, Vijay Rajanna, Sampath Jayarathna, Arantxa Villanueva, Adham Atyabi, Shahram Eivazi:
ETRA 2022: Symposium on Eye Tracking Research and Applications, Seattle, WA, USA, June 8 - 11, 2022. ACM 2022, ISBN 978-1-4503-9252-5
ETRA Short Papers
- Ahmed Al-Hindawi, Marcela P. Vizcaychipi, Yiannis Demiris:
Faster, Better Blink Detection through Curriculum Learning by Augmentation. 1:1-1:7
- Negar Alinaghi, Ioannis Giannopoulos:
Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking. 2:1-2:7
- Mohammed Safayet Arefin, J. Edward Swan II, Russell A. Cohen Hoffing, Steven M. Thurman:
Estimating Perceptual Depth Changes with Eye Vergence and Interpupillary Distance using an Eye Tracker in Virtual Reality. 3:1-3:7
- Somnath Arjun, Archana Hebbar, M. Sanjana, Pradipta Biswas:
VR Cognitive Load Dashboard for Flight Simulator. 4:1-4:4
- Samantha Aziz, Oleg Komogortsev:
An Assessment of the Eye Tracking Signal Quality Captured in the HoloLens 2. 5:1-5:6
- Riccardo Bovo, Daniele Giunchi, Ludwig Sidenmark, Hans Gellersen, Enrico Costanza, Thomas Heinis:
Real-time head-based deep-learning model for gaze probability regions in collaborative VR. 6:1-6:8
- Efe Bozkir, Gjergji Kasneci, Sonja Utz, Enkelejda Kasneci:
Regressive Saccadic Eye Movements on Fake News. 7:1-7:7
- Nora Castner, Jonas Frankemölle, Constanze Keutel, Fabian Hüttig, Enkelejda Kasneci:
LSTMs can distinguish dental expert saccade behavior with high "plaque-urracy". 8:1-8:7
- Agata Rodziewicz-Cybulska, Krzysztof Krejtz, Andrew T. Duchowski, Izabela Krejtz:
Measuring Cognitive Effort with Pupillary Activity and Fixational Eye Movements When Reading: Longitudinal Comparison of Children With and Without Primary Music Education. 9:1-9:8
- Brendan David-John, Kevin R. B. Butler, Eakta Jain:
For Your Eyes Only: Privacy-preserving eye-tracking datasets. 10:1-10:6
- Wolfgang Fuhl, Enkelejda Kasneci:
HPCGen: Hierarchical K-Means Clustering and Level Based Principal Components for Scan Path Genaration. 11:1-11:7
- Christos Gkoumas, Andria Shimi:
"The more you explore, the less you remember": unraveling the effects of scene clutter on learning and memory for targets. 12:1-12:7
- Yuxuan Guo, Sebastian Pannasch, Jens R. Helmert:
Eye Movements in Extended Tasks: Analyses of Ambient/Focal Attention with Coefficient K. 13:1-13:7
- Yutaro Inoue, Koki Koshikawa, Kentaro Takemura:
Gaze Estimation with Imperceptible Marker Displayed Dynamically using Polarization. 14:1-14:5
- Swati Jindal, Harsimran Kaur, Roberto Manduchi:
Tracker/Camera Calibration for Accurate Automatic Gaze Annotation of Images and Videos. 15:1-15:6
- Moritz Langner, Nico Aßfalg, Peyman Toreini, Alexander Maedche:
EyeLikert: Eye-based Interactions for Answering Surveys. 16:1-16:3
- Beibin Li, James C. Snider, Quan Wang, Sachin Mehta, Claire E. Foster, Erin Barney, Linda G. Shapiro, Pamela Ventola, Frederick Shic:
Calibration Error Prediction: Ensuring High-Quality Mobile Eye-Tracking. 17:1-17:7
- Xiaoyi Liu, Christoph Huber-Huber, David Melcher:
The Trans-Saccadic Extrafoveal Preview Effect is Modulated by Object Visibility. 18:1-18:7
- Jaime Maldonado, Lino Antoni Giefer:
On the Use of Distribution-based Metrics for the Evaluation of Drivers' Fixation Maps Against Spatial Baselines. 19:1-19:7
- Johannes Meyer, Tobias Wilm, Reinhold Fiess, Thomas Schlebusch, Wilhelm Stork, Enkelejda Kasneci:
A Holographic Single-Pixel Stereo Camera Sensor for Calibration-free Eye-Tracking in Retinal Projection Augmented Reality Glasses. 20:1-20:7
- Vsevolod Peysakhovich, Wietse D. Ledegang, Mark M. J. Houben, Eric L. Groen:
Classification of flight phases based on pilots' visual scanning strategies. 21:1-21:7
- Paul Prasse, David R. Reich, Silvia Makowski, Lena A. Jäger, Tobias Scheffer:
Fairness in Oculomotoric Biometric Identification. 22:1-22:8
- David R. Reich, Paul Prasse, Chiara Tschirner, Patrick Haller, Frank Goldhammer, Lena A. Jäger:
Inferring Native and Non-Native Human Reading Comprehension and Subjective Text Difficulty from Scanpaths in Reading. 23:1-23:8
- Tim Rolff, Frank Steinicke, Simone Frintrop:
When do Saccades begin? Prediction of Saccades as a Time-to-Event Problem. 24:1-24:7
- Morva Saaty, Mahmoud Reza Hashemi:
Game Audio Impacts on Players' Visual Attention, Model Performance for Cloud Gaming. 25:1-25:7
- Marian Sauter, Tobias Wagner, Anke Huckauf:
Distance between gaze and laser pointer predicts performance in video-based e-learning independent of the presence of an on-screen instructor. 26:1-26:10
- Shreshth Saxena, Elke Lange, Lauren Fink:
Towards efficient calibration for webcam eye-tracking in online experiments. 27:1-27:7
- Lisa Spitzer, Stefanie Müller:
Using a test battery to compare three remote, video-based eye-trackers. 28:1-28:7
- Stefan Stojanov, Sachin S. Talathi, Abhishek Sharma:
The Benefits of Depth Information for Head-Mounted Gaze Estimation. 29:1-29:7
- Clare Teng, Lok Hin Lee, Jayne Lander, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble:
Skill Characterisation of Sonographer Gaze Patterns during Second Trimester Clinical Fetal Ultrasounds using Time Curves. 30:1-30:7
- Clare Teng, Harshita Sharma, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble:
Visualising Spatio-Temporal Gaze Characteristics for Exploratory Data Analysis in Clinical Fetal Ultrasound Scans. 31:1-31:6
- Katarzyna Wisiecka, Krzysztof Krejtz, Izabela Krejtz, Damian Sromek, Adam Cellary, Beata Lewandowska, Andrew T. Duchowski:
Comparison of Webcam and Remote Eye Tracking. 32:1-32:7
- Zehua Zhang, David Crandall, Michael J. Proulx, Sachin S. Talathi, Abhishek Sharma:
Can Gaze Inform Egocentric Action Recognition? 33:1-33:7
ETRA Doctoral Symposium
- Kathrin Kennel:
Using Eye Tracking Data for Enhancing Adaptive Learning Systems. 34:1-34:3
- Gavindya Jayawardena:
Introducing a Real-Time Advanced Eye Movements Analysis Pipeline. 35:1-35:2
- Bhanuka Mahanama:
Multi-User Eye-Tracking. 36:1-36:3
- Nora Castner, Bela Umlauf, Ard Kastrati, Martyna Beata Plomecka, William Schaefer, Enkelejda Kasneci, Zoya Bylinskii:
A gaze-based study design to explore how competency evolves during a photo manipulation task. 37:1-37:3
- Nishan Gunawardena, Jeewani Anupama Ginige, Bahman Javadi, Gough Lui:
Mobile Device Eye Tracking on Dynamic Visual Contents using Edge Computing and Deep Learning. 38:1-38:3
- Toshiya Isomoto, Shota Yamanaka, Buntarou Shizuki:
Interaction Design of Dwell Selection Toward Gaze-based AR/VR Interaction. 39:1-39:2
- Bhanuka Mahanama, Gavindya Jayawardena, Yasasi Abeysinghe, Vikas Ashok, Sampath Jayarathna:
Multidisciplinary Reading Patterns of Digital Documents. 40:1-40:2
- Kristina Miklosova, Zuzana Cerneková, Elena Sikudová:
Saliency Methods Analysis for Paintings. 41:1-41:2
- Florence Paris, Rémy Casanova, Marie-Line Bergeonneau, Daniel Mestre:
Characterizing the expertise of Aircraft Maintenance Technicians using eye-tracking. 42:1-42:3
- Siyuan Peng, Naser Al Madi:
An Eye Opener on the Use of Machine Learning in Eye Movement Based Authentication. 43:1-43:2
- Szymon Tamborski, Michal Meina, Joanna Gorgol, Maciej M. Bartuzel, Krystian Wrobel, Anna Szkulmowska, Maciej Szkulmowski:
FreezEye Tracker - novel fast and precise platform for retinal eye-tracking system for psychophysical experiments. 44:1-44:2
- Saki Tanaka, Airi Tsuji, Kaori Fujinami:
Poster: A Preliminary Investigation on Eye Gaze-based Concentration Recognition during Silent Reading of Text. 45:1-45:2
Session 1: Eye-based prediction of test performance
- Tobias Appel, Lisa Bardach, Enkelejda Kasneci:
Predicting Decision-Making during an Intelligence Test via Semantic Scanpath Comparisons. 46:1-46:5
- Marian Sauter, Teresa Hirzle, Tobias Wagner, Susanne Hummel, Enrico Rukzio, Anke Huckauf:
Can Eye Movement Synchronicity Predict Test Performance With Unreliably-Sampled Data in an Online Learning Context? 47:1-47:5
Session 2: Mind wandering and multitasking
- Lidia Altamura, Ladislao Salmerón, Yvonne Kammerer:
Instant messaging multitasking while reading: a pilot eye-tracking study. 48:1-48:6
- Francesca Zermiani, Andreas Bulling, Maria Wirzberger:
Mind Wandering Trait-level Tendencies During Lecture Viewing: A Pilot Study. 49:1-49:7
Session 3: Gaze visualizations for education and learning
- Daun Kim, Jae-Yeop Jeong, Sumin Hong, Namsub Kim, Jin-Woo Jeong:
Visualizing Instructor's Gaze Information for Online Video-based Learning: Preliminary Study. 50:1-50:6
- Stanislav Popelka, Marketa Beitlova:
Scanpath Comparison using ScanGraph for Education and Learning Purposes: Summary of previous educational studies performed with the use of ScanGraph. 51:1-51:6
Session 4: Methods
- Sara Becker, Andreas Obersteiner, Anika Dreher:
Eye tracking - promising method for analyzing mathematics teachers' assessment competencies? 52:1-52:4
- Michael Burch, Rahel Haymoz, Sabrina Lindau:
The Benefits and Drawbacks of Eye Tracking for Improving Educational Systems. 53:1-53:5
PLEY
- Borna Fatehi, Casper Harteveld, Christoffer Holmgård:
Guiding Game Design Decisions via Eye-Tracking: An Indie Game Case Study. 54:1-54:7
- Michael Lankes, Maurice Sporn, Andreas Winkelbauer, Barbara Stiglbauer:
Looking Confused? - Introducing a VR Game Design for Arousing Confusion Among Players. 55:1-55:6
- Peter A. Smith, Matt Dombrowski, Shea McLinden, Calvin MacDonald, Devon Lynn, John Sparkman, Dominique Courbin, Albert Manero:
Advancing dignity for adaptive wheelchair users via a hybrid eye tracking and electromyography training game. 56:1-56:7
ETVIS
- Michael Burch, Günter Wallner, Veerle Fürst, Teodor-Cristian Lungu, Daan Boelhouwers, Dhiksha Rajasekaran, Richard Farla, Sander van Heesch:
Linked and Coordinated Visual Analysis of Eye Movement Data. 57:1-57:6
- Stanislav Popelka, Alena Vondráková, Romana Skulnikova:
The Effect of Day and Night Mode on the Perception of Map Navigation Device. 58:1-58:6
- Nils Rodrigues, Lin Shao, Jia Jun Yan, Tobias Schreck, Daniel Weiskopf:
Eye Gaze on Scatterplot: Concept and First Results of Recommendations for Exploration of SPLOMs Using Implicit Data Selection. 59:1-59:7
- Yao Wang, Maurice Koch, Mihai Bâce, Daniel Weiskopf, Andreas Bulling:
Impact of Gaze Uncertainty on AOIs in Information Visualisations. 60:1-60:6
COGAIN
- Tomomi Okano, Minoru Nakayama:
Research on Time Series Evaluation of Cognitive Load Factors using Features of Eye Movement. 61:1-61:6
- Baosheng James Hou, John Paulin Hansen, Cihan Uyanik, Per Bækgaard, Sadasivan Puthusserypady, Jacopo M. Araujo, I. Scott MacKenzie:
Feasibility of a Device for Gaze Interaction by Visually-Evoked Brain Signals. 62:1-62:7
- Jacek Matulewski, Mateusz Patera:
Usability of the super-vowel for gaze-based text entry. 63:1-63:5
- Heiko Drewes, Sophia Sakel, Heinrich Hussmann:
User Perception of Smooth Pursuit Target Speed. 64:1-64:7
- Teresa Hirzle, Marian Sauter, Tobias Wagner, Susanne Hummel, Enrico Rukzio, Anke Huckauf:
Attention of Many Observers Visualized by Eye Movements. 65:1-65:7
- Katharina Reiter, Ken Pfeuffer, Augusto Esteves, Tim Mittermeier, Florian Alt:
Look & Turn: One-handed and Expressive Menu Interaction by Gaze and Arm Turns in VR. 66:1-66:7
OpenEDS
- Samantha Aziz, Dillon J. Lohr, Oleg Komogortsev:
SynchronEyes: A Novel, Paired Data Set of Eye Movements Recorded Simultaneously with Remote and Wearable Eye-Tracking Devices. 67:1-67:6
- Dmytro Katrychuk, Oleg V. Komogortsev:
A study on the generalizability of Oculomotor Plant Mathematical Model. 68:1-68:7
- Conny Lu, Qian Zhang, Kapil Krishnakumar, Jixu Chen, Henry Fuchs, Sachin S. Talathi, Kun Liu:
Geometry-Aware Eye Image-To-Image Translation. 69:1-69:7
- Mehedi Hasan Raju, Dillon J. Lohr, Oleg Komogortsev:
Iris Print Attack Detection using Eye Movement Signals. 70:1-70:6