Journal on Multimodal User Interfaces, Volume 7
Volume 7, Numbers 1-2, March 2013
- Patrizia Paggio, Dirk Heylen, Michael Kipp: Preface. 1-3
- Andy Lücking, Kirsten Bergmann, Florian Hahn, Stefan Kopp, Hannes Rieser: Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications. 5-18
- Catharine Oertel, Fred Cummins, Jens Edlund, Petra Wagner, Nick Campbell: D64: a corpus of richly recorded conversational interaction. 19-28
- Patrizia Paggio, Costanza Navarretta: Head movements, facial expressions and feedback in conversations: empirical evidence from Danish multimodal data. 29-37
- Dairazalia Sanchez-Cortes, Oya Aran, Dinesh Babu Jayagopi, Marianne Schmid Mast, Daniel Gatica-Perez: Emergent leaders through looking and speaking: from audio-visual data to multimodal recognition. 39-53
- Brigitte Bigi, Cristel Portes, Agnès Steuckardt, Marion Tellier: A multimodal study of answers to disruptions. 55-66
- Isabella Poggi, Francesca D'Errico, Laura Vincze: Comments by words, face and body. 67-78
- Xavier Alameda-Pineda, Jordi Sanchez-Riera, Johannes Wienke, Vojtech Franc, Jan Cech, Kaustubh Kulkarni, Antoine Deleforge, Radu Horaud: RAVEL: an annotated corpus for training robots with audiovisual abilities. 79-91
- Anthony Fleury, Michel Vacher, François Portet, Pedro Chahuara, Norbert Noury: A French corpus of audio and multimodal interactions in a health smart home. 93-109
- Michel Dubois, Damien Dupré, Jean-Michel Adam, Anna Tcherkassof, Nadine Mandran, Brigitte Meillon: The influence of facial interface design on dynamic emotional recognition. 111-119
- George Caridakis, Johannes Wagner, Amaryllis Raouzaiou, Florian Lingenfelser, Kostas Karpouzis, Elisabeth André: A cross-cultural, multimodal, affective corpus for gesture expressivity analysis. 121-134
- Jocelynn Cu, Katrina Ysabel Solomon, Merlin Teodosia C. Suarez, Madelene Sta. Maria: A multimodal emotion corpus for Filipino and its uses. 135-142
- Marko Tkalcic, Andrej Kosir, Jurij F. Tasic: The LDOS-PerAff-1 corpus of facial-expression video clips with affective, personality and user-interaction metadata. 143-155
- Slim Essid, Xinyu Lin, Marc Gowing, Georgios Kordelas, Anil Aksay, Philip Kelly, Thomas Fillon, Qianni Zhang, Alfred Dielmann, Vlado Kitanovski, Robin Tournemenne, Aymeric Masurelle, Ebroul Izquierdo, Noel E. O'Connor, Petros Daras, Gaël Richard: A multi-modal dance corpus for research into interaction between humans in virtual environments. 157-170
Volume 7, Number 3, November 2013
- Deborah A. Dahl: The W3C multimodal architecture and interfaces standard. 171-182
- Dirk Schnelle-Walka, Stefan Radomski, Max Mühlhäuser: JVoiceXML as a modality component in the W3C multimodal architecture. 183-194
- Kostas Karpouzis, George Caridakis, Roddy Cowie, Ellen Douglas-Cowie: Induction, recording and recognition of natural emotions from facial expressions and speech prosody. 195-206
- Christopher McMurrough, Vangelis Metsis, Dimitrios I. Kosmopoulos, Ilias Maglogiannis, Fillia Makedon: A dataset for point of gaze detection using head poses and eye images. 207-215
- Jyoti Joshi, Roland Goecke, Sharifa Alghowinem, Abhinav Dhall, Michael Wagner, Julien Epps, Gordon Parker, Michael Breakspear: Multimodal assistive technologies for depression diagnosis and monitoring. 217-228
- Christian Peter, Andreas Kreiner, Martin Schröter, Hyosun Kim, Gerald Bieber, Fredrik Öhberg, Kei Hoshi, Eva Lindh Waterworth, John A. Waterworth, Soledad Ballesteros: AGNES: Connecting people in a multimodal way. 229-245
- Randy Klaassen, Rieks op den Akker, Tine Lavrysen, Susan van Wissen: User preferences for multi-device context-aware feedback in a digital coaching system. 247-267
Volume 7, Number 4, December 2013
- Andrea Sanna, Fabrizio Lamberti, Gianluca Paravati, Felipe Domingues Rocha: A Kinect-based interface to animate virtual characters. 269-279
- Mahmoud Ghorbel, Stéphane Betgé-Brezetz, Marie-Pascale Dupont, Guy-Bertrand Kamga, Sophie Piekarec, Juliette Reerink, Arnaud Vergnol: Multimodal notification framework for the elderly and professionals in a smart nursing home. 281-297
- Felix Schüssel, Frank Honold, Michael Weber: Influencing factors on multimodal interaction during selection tasks. 299-310
- Matthieu Courgeon, Céline Clavel: MARC: a framework that features emotion models for facial animation during human-computer interaction. 311-319
- Elena Vildjiounaite, Daniel Schreiber, Vesa Kyllönen, Marcus Ständer, Ilkka Niskanen, Jani Mäntyjärvi: Prediction of interface preferences with a classifier selection approach. 321-349
- Nadia Elouali, José Rouillard, Xavier Le Pallec, Jean-Claude Tarby: Multimodal interaction: a survey from model driven engineering and mobile perspectives. 351-370