Journal on Multimodal User Interfaces, Volume 15
Volume 15, Number 1, March 2021
- Felix G. Hamza-Lup, Ioana R. Goldbach: Multimodal, visuo-haptic games for abstract theory instruction: grabbing charged particles. 1-10
- M. A. Viraj J. Muthugala, P. H. D. Arjuna S. Srimal, A. G. Buddhika P. Jayasekara: Improving robot's perception of uncertain spatial descriptors in navigational instructions by evaluating influential gesture notions. 11-24
- Delphine Potdevin, Céline Clavel, Nicolas Sabouret: Virtual intimacy in human-embodied conversational agent interactions: the influence of multimodality on its perception. 25-43
- Feng Feng, Puhong Li, Tony Stockman: Exploring crossmodal perceptual enhancement and integration in a sequence-reproducing task with cognitive priming. 45-59
- Daniel P. Davison, Frances M. Wijnen, Vicky Charisi, Jan van der Meij, Dennis Reidsma, Vanessa Evers: Words of encouragement: how praise delivered by a social robot changes children's mindset for learning. 61-76
- Lousin Moumdjian, Thomas Vervust, Joren Six, Ivan Schepers, Micheline Lesaffre, Peter Feys, Marc Leman: The Augmented Movement Platform For Embodied Learning (AMPEL): development and reliability. 77-83
- Lousin Moumdjian, Thomas Vervust, Joren Six, Ivan Schepers, Micheline Lesaffre, Peter Feys, Marc Leman: Correction to: The Augmented Movement Platform For Embodied Learning (AMPEL): development and reliability. 85
Volume 15, Number 2, June 2021
- Usman Malik, Mukesh Barange, Julien Saunier, Alexandre Pauchet: A novel focus encoding scheme for addressee detection in multiparty interaction using machine learning algorithms. 1-14
- Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias Huber, Elisabeth André: "Let me explain!": exploring the potential of virtual agents in explainable AI interaction design. 87-98
- Lucile Dupuy, Etienne de Sevin, Hélène Cassoudesalle, Orlane Ballot, Patrick Dehail, Bruno Aouizerate, Emmanuel Cuny, Jean-Arthur Micoulaud-Franchi, Pierre Philip: Guidelines for the design of a virtual patient for psychiatric interview training. 99-107
- Matias Volonte, Reza Ghaiumy Anaraky, Rohith Venkatakrishnan, Roshan Venkatakrishnan, Bart P. Knijnenburg, Andrew T. Duchowski, Sabarish V. Babu: Empirical evaluation and pathway modeling of visual attention to virtual humans in an appearance fidelity continuum. 109-119
- Rex Hsieh, Hisashi Sato: Evaluation of avatar and voice transform in programming e-learning lectures. 121-129
- Timothy W. Bickmore, Everlyne Kimani, Ameneh Shamekhi, Prasanth Murali, Dhaval Parmar, Ha Trinh: Virtual agents as supporting media for scientific presentations. 131-146
- Mohan Zalake, Fatemeh Tavassoli, Kyle Duke, Thomas J. George, François Modave, Jordan Neil, Janice L. Krieger, Benjamin Lok: Internet-based tailored virtual human health intervention to promote colorectal cancer screening: design guidelines from two user studies. 147-162
- Justyna Swidrak, Grzegorz Pochwatko, Andrea Insabato: Does an agent's touch always matter? Study on virtual Midas touch, masculinity, social status, and compliance in Polish men. 163-174
- Amal Abdulrahman, Deborah Richards, Hedieh Ranjbartabar, Samuel Mascarenhas: Verbal empathy and explanation to encourage behaviour change intention. 189-199
- Minha Lee, Gale M. Lucas, Jonathan Gratch: Comparing mind perception in strategic exchanges: human-agent negotiation, dictator and ultimatum games. 201-214
- Johnathan Mell, Markus Beissinger, Jonathan Gratch: An expert-model and machine learning hybrid approach to predicting human-agent negotiation outcomes in varied data. 215-227
- David Obremski, Jean-Luc Lugrin, Philipp Schaper, Birgit Lugrin: Non-native speaker perception of Intelligent Virtual Agents in two languages: the impact of amount and type of grammatical mistakes. 229-238
- Dimosthenis Kontogiorgos, André Pereira, Joakim Gustafson: Grounding behaviours with conversational interfaces: effects of embodiment and failures. 239-254
Volume 15, Number 3, September 2021
- Fotis Liarokapis, Sebastian von Mammen, Athanasios Vourvopoulos: Advanced multimodal interaction techniques and user interfaces for serious games and virtual environments. 255-256
- Ruixue Liu, Erin Walker, Leah Friedman, Catherine M. Arrington, Erin Treacy Solovey: fNIRS-based classification of mind-wandering with personalized window selection for multimodal learning interfaces. 257-272
- José Mercado, Lizbeth Escobedo, Monica Tentori: A BCI video game using neurofeedback improves the attention of children with autism. 273-281
- Oscar Peña, Franceli L. Cibrian, Monica Tentori: Circus in Motion: a multimodal exergame supporting vestibular therapy for children with autism. 283-299
- K. Renuga Devi, H. Hannah Inbarani: Neighborhood based decision theoretic rough set under dynamic granulation for BCI motor imagery classification. 301-321
- Brianna J. Tomlinson, Bruce N. Walker, Emily B. Moore: Identifying and evaluating conceptual representations for auditory-enhanced interactive physics simulations. 323-334
Volume 15, Number 4, December 2021
- Hamdi Dibeklioglu, Elif Sürer, Albert Ali Salah, Thierry Dutoit: Behavior and usability analysis for multimodal user interfaces. 335-336
- Dersu Giritlioglu, Burak Mandira, Selim Firat Yilmaz, Can Ufuk Ertenli, Berhan Faruk Akgür, Merve Kiniklioglu, Asli Gül Kurt, Emre Mutlu, Seref Can Gürel, Hamdi Dibeklioglu: Multimodal analysis of personality traits on videos of self-presentation and induced behavior. 337-358
- Yi Li, Shreya Ghosh, Jyoti Joshi: PLAAN: Pain Level Assessment with Anomaly-detection based Network. 359-372
- Metehan Doyran, Arjan Schimmel, Pinar Baki, Kübra Ergin, Batikan Türkmen, Almila Akdag Salah, Sander C. J. Bakkes, Heysem Kaya, Ronald Poppe, Albert Ali Salah: MUMBAI: multi-person, multimodal board game affect and interaction analysis dataset. 373-391
- Elif Sürer, Mustafa Erkayaoglu, Zeynep Nur Öztürk, Furkan Yücel, Emin Alp Biyik, Burak Altan, Büsra Senderin, Zeliha Oguz, Servet Gürer, H. Sebnem Düzgün: Developing a scenario-based video game generation framework for computer and virtual reality environments: a comparative usability study. 393-411
- Gökhan Ince, Rabia Yorganci, Ahmet Özkul, Taha Berkay Duman, Hatice Köse: An audiovisual interface-based drumming system for multimodal human-robot interaction. 413-428
- Jun He, Xiaocui Yu, Bo Sun, Lejun Yu: Facial expression and action unit recognition augmented by their dependencies on graph convolutional networks. 429-440