ICMI 2020: Virtual Event, The Netherlands
- Khiet P. Truong, Dirk Heylen, Mary Czerwinski, Nadia Berthouze, Mohamed Chetouani, Mikio Nakano:
ICMI '20: International Conference on Multimodal Interaction, Virtual Event, The Netherlands, October 25-29, 2020. ACM 2020, ISBN 978-1-4503-7581-8
Keynote Talks
- Asli Özyürek:
From Hands to Brains: How Does Human Body Talk, Think and Interact in Face-to-Face Language Use? 1-2
- Atau Tanaka:
Musical Multimodal Interaction: From Bodies to Ecologies. 3
- Shrikanth S. Narayanan:
Human-centered Multimodal Machine Intelligence. 4-5
Long Papers
- Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Ashish Jaiswal, Alexis Lueckenhoff, Maria Kyrarini, Fillia Makedon:
A Multi-modal System to Assess Cognition in Children from their Physical Movements. 6-14
- Shane D. Sims, Cristina Conati:
A Neural Architecture for Detecting User Confusion in Eye-tracking Data. 15-23
- Cigdem Beyan, Matteo Bustreo, Muhammad Shahid, Gian Luca Bailo, Nicolò Carissimi, Alessio Del Bue:
Analysis of Face-Touching Behavior in Large Scale Social Interaction Dataset. 24-32
- Felix Putze, Dennis Küster, Timo Urban, Alexander Zastrow, Marvin Kampen:
Attention Sensing through Multimodal User Modeling in an Augmented Reality Guessing Game. 33-40
- Md. Mahbubur Rahman, Mohsin Y. Ahmed, Tousif Ahmed, Bashima Islam, Viswam Nathan, Korosh Vatanparvar, Ebrahim Nemati, Daniel McCaffrey, Jilong Kuang, Jun Alex Gao:
BreathEasy: Assessing Respiratory Diseases Using Mobile Multimodal Sensors. 41-49
- Angela Constantinescu, Karin Müller, Monica Haurilet, Vanessa Petrausch, Rainer Stiefelhagen:
Bring the Environment to Life: A Sonification Module for People with Visual Impairments to Improve Situation Awareness. 50-59
- Çisem Özkul, David Geerts, Isa Rutten:
Combining Auditory and Mid-Air Haptic Feedback for a Light Switch Button. 60-69
- Michal Muszynski, Jamie Zelazny, Jeffrey M. Girard, Louis-Philippe Morency:
Depression Severity Assessment for Adolescents at High Risk of Mental Disorders. 70-78
- Nujud Aloshban, Anna Esposito, Alessandro Vinciarelli:
Detecting Depression in Less Than 10 Seconds: Impact of Speaking Time on Depression Detection Sensitivity. 79-87
- Dong-Bach Vo, Stephen A. Brewster, Alessandro Vinciarelli:
Did the Children Behave?: Investigating the Relationship Between Attachment Condition and Child Computer Interaction. 88-96
- Huili Chen, Yue Zhang, Felix Weninger, Rosalind W. Picard, Cynthia Breazeal, Hae Won Park:
Dyadic Speech-based Affect Recognition using DAMI-P2C Parent-child Multimodal Interaction Dataset. 97-106
- Andrew Emerson, Nathan L. Henderson, Jonathan P. Rowe, Wookhee Min, Seung Y. Lee, James Minogue, James C. Lester:
Early Prediction of Visitor Engagement in Science Museums with Multimodal Learning Analytics. 107-116
- Mounia Ziat, Katherine Chin, Roope Raisamo:
Effects of Visual Locomotion and Tactile Stimuli Duration on the Emotional Dimensions of the Cutaneous Rabbit Illusion. 117-124
- Shaun Alexander Macdonald, Stephen A. Brewster, Frank E. Pollick:
Eliciting Emotion with Vibrotactile Stimuli Evocative of Real-World Sensations. 125-133
- Nathan L. Henderson, Wookhee Min, Jonathan P. Rowe, James C. Lester:
Enhancing Affect Detection in Game-Based Learning Environments with Multimodal Conditional Generative Modeling. 134-143
- Ryosuke Ueno, Yukiko I. Nakano, Jie Zeng, Fumio Nihei:
Estimating the Intensity of Facial Expressions Accompanying Feedback Responses in Multiparty Video-Mediated Communication. 144-152
- Bernd Dudzik, Joost Broekens, Mark A. Neerincx, Hayley Hung:
Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions. 153-162
- Oswald Barral, Sébastien Lallé, Grigorii Guz, Alireza Iranpour, Cristina Conati:
Eye-Tracking to Predict User Cognitive Abilities and Performance for User-Adaptive Narrative Visualizations. 163-173
- Lorcan Reidy, Dennis Chan, Charles Nduka, Hatice Gunes:
Facial Electromyography-based Adaptive Virtual Reality Gaming for Cognitive Training. 174-183
- Anke van Oosterhout, Miguel Bruns, Eve E. Hoggan:
Facilitating Flexible Force Feedback Design with Feelix. 184-193
- Brennan Jones, Jens Maiero, Alireza Mogharrab, Ivan A. Aguilar, Ashu Adhikari, Bernhard E. Riecke, Ernst Kruijff, Carman Neustaedter, Robert W. Lindeman:
FeetBack: Augmenting Robotic Telepresence with Haptic Feedback on the Feet. 194-203
- Brandon M. Booth, Shrikanth S. Narayanan:
Fifty Shades of Green: Towards a Robust Measure of Inter-annotator Agreement for Continuous Signals. 204-212
- Aishat Aloba, Julia Woodward, Lisa Anthony:
FilterJoint: Toward an Understanding of Whole-Body Gesture Articulation. 213-221
- Chris Zimmerer, Erik Wolf, Sara Wolf, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik:
Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality. 222-231
- Lik Hang Lee, Ngo Yan Yeung, Tristan Braud, Tong Li, Xiang Su, Pan Hui:
Force9: Force-assisted Miniature Keyboard on Smart Wearables. 232-241
- Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Simon Alexandersson, Iolanda Leite, Hedvig Kjellström:
Gesticulator: A framework for semantically-aware speech-driven gesture generation. 242-250
- Dulanga Weerakoon, Vigneshwaran Subbaraju, Nipuni Karumpulli, Tuan Tran, Qianli Xu, U-Xuan Tan, Joo Hwee Lim, Archan Misra:
Gesture Enhanced Comprehension of Ambiguous Human-to-Robot Instructions. 251-259
- Angela Vujic, Stephanie Tong, Rosalind W. Picard, Pattie Maes:
Going with our Guts: Potentials of Wearable Electrogastrography (EGG) for Affect Detection. 260-268
- Jun Wang, Grace Ngai, Hong Va Leong:
Hand-eye Coordination for Textual Difficulty Detection in Text Summarization. 269-277
- Lian Beenhakker, Fahim A. Salim, Dees B. W. Postma, Robby van Delden, Dennis Reidsma, Bert-Jan van Beijnum:
How Good is Good Enough?: The Impact of Errors in Single Person Action Classification on the Modeling of Group Interactions in Volleyball. 278-286
- Lauren Klein, Victor Ardulov, Yuhua Hu, Mohammad Soleymani, Alma Gharib, Barbara Thompson, Pat Levitt, Maja J. Mataric:
Incorporating Measures of Intermodal Coordination in Automated Analysis of Infant-Mother Interaction. 287-295
- Nimesha Ranasinghe, Meetha Nesam James, Michael Gecawicz, Jonathan Bland, David Smith:
Influence of Electric Taste, Smell, Color, and Thermal Sensory Modalities on the Liking and Mediated Emotions of Virtual Flavor Perception. 296-304
- Leena Mathur, Maja J. Mataric:
Introducing Representations of Facial Affect in Automated Multimodal Deception Detection. 305-314
- Shun Katada, Shogo Okada, Yuki Hirano, Kazunori Komatani:
Is She Truly Enjoying the Conversation?: Analysis of Physiological Signals toward Adaptive Dialogue Systems. 315-323
- Koji Inoue, Kohei Hara, Divesh Lala, Kenta Yamamoto, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
Job Interviewer Android with Elaborate Follow-up Question Generation. 324-332
- Soumyajit Chatterjee, Avijoy Chakma, Aryya Gangopadhyay, Nirmalya Roy, Bivas Mitra, Sandip Chakraborty:
LASO: Exploiting Locomotive and Acoustic Signatures over the Edge to Annotate IMU Data for Human Activity Recognition. 333-342
- Yanan Wang, Jianming Wu, Jinfa Huang, Gen Hattori, Yasuhiro Takishima, Shinya Wada, Rui Kimura, Jie Chen, Satoshi Kurihara:
LDNN: Linguistic Knowledge Injectable Deep Neural Network for Group Cohesiveness Understanding. 343-350
- Riku Arakawa, Hiromu Yakura:
Mimicker-in-the-Browser: A Novel Interaction Using Mimicry to Augment the Browsing Experience. 351-360
- Shen Yan, Di Huang, Mohammad Soleymani:
Mitigating Biases in Multimodal Personality Assessment. 361-369
- Sarah Morrison-Smith, Aishat Aloba, Hangwei Lu, Brett Benda, Shaghayegh Esmaeili, Gianne Flores, Jesse Smith, Nikita Soni, Isaac Wang, Rejin Joy, Damon L. Woodard, Jaime Ruiz, Lisa Anthony:
MMGatorAuth: A Novel Multimodal Dataset for Authentication Interactions in Gesture and Voice. 370-377
- Ahmed Hussen Abdelaziz, Barry-John Theobald, Paul Dixon, Reinhard Knothe, Nicholas Apostoloff, Sachin Kajareker:
Modality Dropout for Improved Performance-driven Talking Faces. 378-386
- Yiqun Yao, Verónica Pérez-Rosas, Mohamed Abouelenien, Mihai Burzo:
MORSE: MultimOdal sentiment analysis for Real-life SEttings. 387-396
- Andrea Vidal, Ali N. Salman, Wei-Cheng Lin, Carlos Busso:
MSP-Face Corpus: A Natural Audiovisual Emotional Database. 397-405
- Leili Tavabi, Kalin Stefanov, Larry Zhang, Brian Borsari, Joshua D. Woolley, Stefan Scherer, Mohammad Soleymani:
Multimodal Automatic Coding of Client Behavior in Motivational Interviewing. 406-413
- Cong Bao, Zafeirios Fountas, Temitayo A. Olugbade, Nadia Bianchi-Berthouze:
Multimodal Data Fusion based on the Global Workspace Theory. 414-422
- Shree Krishna Subburaj, Angela E. B. Stewart, Arjun Ramesh Rao, Sidney K. D'Mello:
Multimodal, Multiparty Modeling of Collaborative Problem Solving Performance. 423-432
- Ilhan Aslan, Andreas Seiderer, Chi Tai Dang, Simon Rädler, Elisabeth André:
PiHearts: Resonating Experiences of Self and Others Enabled by a Tangible Somaesthetic Design. 433-441
- Yi Ding, Radha Kumaran, Tianjiao Yang, Tobias Höllerer:
Predicting Video Affect via Induced Affection in the Wild. 442-451
- Vansh Narula, Kexin Feng, Theodora Chaspari:
Preserving Privacy in Image-based Emotion Recognition through User Anonymization. 452-460
- Patrizia Di Campli San Vito, Stephen A. Brewster, Frank E. Pollick, Simon Thompson, Lee Skrypchuk, Alexandros Mouzakitis:
Purring Wheel: Thermal and Vibrotactile Notifications on the Steering Wheel. 461-469
- Patricia Ivette Cornelio Martinez, Emanuela Maggioni, Giada Brianza, Sriram Subramanian, Marianna Obrist:
SmellControl: The Study of Sense of Agency in Smell. 470-480
- Yufeng Yin, Baiyu Huang, Yizhen Wu, Mohammad Soleymani:
Speaker-Invariant Adversarial Domain Adaptation for Emotion Recognition. 481-490
- Wei Guo, Byeong-Young Cho, Jingtao Wang:
StrategicReading: Understanding Complex Mobile Reading Strategies via Implicit Behavior Sensing. 491-500
- Amr Gomaa, Guillermo Reyes, Alexandra Alles, Lydia Rupp, Michael Feld:
Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle. 501-509
- Lingyu Zhang, Richard J. Radke:
Temporal Attention and Consistency Measuring for Video Question Answering. 510-518
- Parul Gupta, Komal Chugh, Abhinav Dhall, Ramanathan Subramanian:
The eyes know it: FakeET- An Eye-tracking Database to Understand Deepfake Perception. 519-527
- Béatrice Biancardi, Lou Maisonnave-Couterou, Pierrick Renault, Brian Ravenet, Maurizio Mancini, Giovanna Varni:
The WoNoWa Dataset: Investigating the Transactive Memory System in Small Group Interactions. 528-537
- Kumar Akash, Neera Jain, Teruhisa Misu:
Toward Adaptive Trust Calibration for Level 2 Driving Automation. 538-547
- Victoria Lin, Jeffrey M. Girard, Michael A. Sayette, Louis-Philippe Morency:
Toward Multimodal Modeling of Emotional Expressiveness. 548-557
- Lars Steinert, Felix Putze, Dennis Küster, Tanja Schultz:
Towards Engagement Recognition of People with Dementia in Care Settings. 558-565
- Skanda Muralidhar, Emmanuelle Patricia Kleinlogel, Eric Mayor, Adrian Bangerter, Marianne Schmid Mast, Daniel Gatica-Perez:
Understanding Applicants' Reactions to Asynchronous Video Interviews Through Self-reports and Nonverbal Cues. 566-574
- Sami Alperen Akgun, Moojan Ghafurian, Mark Crowley, Kerstin Dautenhahn:
Using Emotions to Complement Multi-Modal Human-Robot Interaction in Urban Search and Rescue Scenarios. 575-584
- Matthias Kraus, Marvin R. G. Schiller, Gregor Behnke, Pascal Bercher, Michael Dorna, Michael Dambier, Birte Glimm, Susanne Biundo, Wolfgang Minker:
"Was that successful?" On Integrating Proactive Meta-Dialogue in a DIY-Assistant using Multimodal Cues. 585-594
- Abdul Rafey Aftab, Michael von der Beeck, Michael Feld:
You Have a Point There: Object Selection Inside an Automobile Using Gaze, Head Pose and Finger Pointing. 595-603
Short Papers
- Jasper J. van Beers, Ivo V. Stuldreher, Nattapong Thammasan, Anne-Marie Brouwer:
A Comparison between Laboratory and Wearable Sensors in the Context of Physiological Synchrony. 604-608
- Toshiki Onishi, Arisa Yamauchi, Ryo Ishii, Yushi Aono, Akihiro Miyata:
Analyzing Nonverbal Behaviors along with Praising. 609-613
- Tousif Ahmed, Mohsin Y. Ahmed, Md. Mahbubur Rahman, Ebrahim Nemati, Bashima Islam, Korosh Vatanparvar, Viswam Nathan, Daniel McCaffrey, Jilong Kuang, Jun Alex Gao:
Automated Time Synchronization of Cough Events from Multimodal Sensors in Mobile Devices. 614-619
- Kumar Shubham, Emmanuelle Patricia Kleinlogel, Anaïs Butera, Marianne Schmid Mast, Dinesh Babu Jayagopi:
Conventional and Non-conventional Job Interviewing Methods: A Comparative Study in Two Countries. 620-624
- Ronald Cumbal, José Lopes, Olov Engwall:
Detection of Listener Uncertainty in Robot-Led Second Language Conversation Practice. 625-629
- Haley Lepp, Chee Wee Leong, Katrina Roohr, Michelle P. Martin-Raugh, Vikram Ramanarayanan:
Effect of Modality on Human and Machine Scoring of Presentation Videos. 630-634
- Ziyang Chen, Yu-Peng Chen, Alex Shaw, Aishat Aloba, Pavlo D. Antonenko, Jaime Ruiz, Lisa Anthony:
Examining the Link between Children's Cognitive Development and Touchscreen Interaction Patterns. 635-639
- Jari Kangas, Olli Koskinen, Roope Raisamo:
Gaze Tracker Accuracy and Precision Measurements in Virtual Reality Headsets. 640-644
- Liang Yang, Jingjie Zeng, Tao Peng, Xi Luo, Jinghui Zhang, Hongfei Lin:
Leniency to those who confess?: Predicting the Legal Judgement via Multi-Modal Analysis. 645-649
- Everlyne Kimani, Prasanth Murali, Ameneh Shamekhi, Dhaval Parmar, Sumanth Munikoti, Timothy W. Bickmore:
Multimodal Assessment of Oral Presentations using HMMs. 650-654
- Soheil Rayatdoost, David Rudrauf, Mohammad Soleymani:
Multimodal Gated Information Fusion for Emotion Recognition from EEG Signals and Facial Behaviors. 655-659
- Kalin Stefanov, Baiyu Huang, Zongjian Li, Mohammad Soleymani:
OpenSense: A Platform for Multimodal Data Acquisition and Behavior Perception. 660-664
- Jaya Narain, Kristina T. Johnson, Craig Ferguson, Amanda O'Brien, Tanya Talkar, Yue Zhang, Peter Wofford, Thomas F. Quatieri, Rosalind W. Picard, Pattie Maes:
Personalized Modeling of Real-World Vocalizations from Nonverbal Individuals. 665-669
- Margaret von Ebers, Ehsanul Haque Nirjhar, Amir H. Behzadan, Theodora Chaspari:
Predicting the Effectiveness of Systematic Desensitization Through Virtual Reality for Mitigating Public Speaking Anxiety. 670-674
- Akshat Choube, Mohammad Soleymani:
Punchline Detection using Context-Aware Hierarchical Multimodal Fusion. 675-679
- Miltiadis Marios Katsakioris, Ioannis Konstas, Pierre Yves Mignotte, Helen F. Hastie:
ROSMI: A Multimodal Corpus for Map-based Instruction-Giving. 680-684
- Murat Kirtay, Ugo Albanese, Lorenzo Vannucci, Guido Schillaci, Cecilia Laschi, Egidio Falotico:
The iCub Multisensor Datasets for Robot and Computer Vision Applications. 685-688
- Roelof Anne Jelle de Vries, Juliet A. M. Haarman, Emiel C. Harmsen, Dirk K. J. Heylen, Hermie J. Hermens:
The Sensory Interactive Table: Exploring the Social Space of Eating. 689-693
- Wail El Bani, Mohamed Chetouani:
Touch Recognition with Attentive End-to-End Model. 694-698
Doctoral Consortium Papers
- Matthias Merk:
Automating Facilitation and Documentation of Collaborative Ideation Processes. 699-702
- Mengjiong Bai:
Detection of Micro-expression Recognition Based on Spatio-Temporal Modelling and Spatial Attention. 703-707
- George-Petru Ciordas-Hertel:
How to Complement Learning Analytics with Smartwatches?: Fusing Physical Activities, Environmental Context, and Learning Activities. 708-712
- Lucien Maman:
Multimodal Groups' Analysis for Automated Cohesion Estimation. 713-717
- Ivo V. Stuldreher:
Multimodal Physiological Synchrony as Measure of Attentional Engagement. 718-722
- Madhawa Perera:
Personalised Human Device Interaction through Context aware Augmented Reality. 723-727
- B. Ashwini:
Robot Assisted Diagnosis of Autism in Children. 728-732
- Heera Lee:
Supporting Instructors to Provide Emotional and Instructional Scaffolding for English Language Learners through Biosensor-based Feedback. 733-737
- Zhitian Zhang:
Towards a Multimodal and Context-Aware Framework for Human Navigational Intent Inference. 738-742
- Mireille Fares:
Towards Multimodal Human-Like Characteristics and Expressive Visual Prosody in Virtual Agents. 743-747
- George Boateng:
Towards Real-Time Multimodal Emotion Recognition among Couples. 748-753
- Naveen Madapana:
Zero-Shot Learning for Gesture Recognition. 754-757
Demo and Exhibit Papers
- Cigdem Turan, Patrick Schramowski, Constantin A. Rothkopf, Kristian Kersting:
Alfie: An Interactive Robot with Moral Compass. 758-759
- Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fiérrez:
FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment. 760-761
- Sarah Ita Levitan, Xinyue Tan, Julia Hirschberg:
LieCatcher: Game Framework for Collecting Human Judgments of Deceptive Speech. 762-763
- Carla Viegas, Albert Lu, Annabel Su, Carter Strear, Yi Xu, Albert Topdjian, Daniel Limón, J. J. Xu:
Spark Creativity by Speaking Enthusiastically: Communication Training using an E-Coach. 764-765
- Edgar Rojas-Muñoz, Kyle Couperus, Juan P. Wachs:
The AI-Medic: A Multimodal Artificial Intelligent Mentor for Trauma Surgery. 766-767
Grand Challenge Papers: Emotion Recognition in the Wild Challenge
- Zehui Yu, Xiehe Huang, Xiubao Zhang, Haifeng Shen, Qun (Tracy) Li, Weihong Deng, Jian Tang, Yi Yang, Jieping Ye:
A Multi-Modal Approach for Driver Gaze Prediction to Remove Identity Bias. 768-776
- Jianming Wu, Bo Yang, Yanan Wang, Gen Hattori:
Advanced Multi-Instance Learning Method with Multi-features Engineering and Conservative Optimization for Engagement Intensity Prediction. 777-783
- Abhinav Dhall, Garima Sharma, Roland Goecke, Tom Gedeon:
EmotiW 2020: Driver Gaze, Group Emotion, Student Engagement and Physiological Signal based Challenges. 784-789
- Kui Lyu, Minghao Wang, Liyu Meng:
Extract the Gaze Multi-dimensional Information Analysis Driver Behavior. 790-797
- Boyang Tom Jin, Leila Abdelrahman, Cong Kevin Chen, Amil Khanzada:
Fusical: Multimodal Fusion for Video Sentiment. 798-806
- Chuanhe Liu, Wenqiang Jiang, Minghao Wang, Tianhao Tang:
Group Level Audio-Video Emotion Recognition Using Hybrid Networks. 807-812
- Anastasia Petrova, Dominique Vaufreydaz, Philippe Dessus:
Group-Level Emotion Recognition Using a Unimodal Privacy-Safe Non-Individual Approach. 813-820
- Sandra Ottl, Shahin Amiriparian, Maurice Gerczuk, Vincent Karas, Björn W. Schuller:
Group-level Speech Emotion Recognition Utilising Deep Spectrum Features. 821-826
- Yanan Wang, Jianming Wu, Panikos Heracleous, Shinya Wada, Rui Kimura, Satoshi Kurihara:
Implicit Knowledge Injectable Cross Attention Audiovisual Model for Group Emotion Recognition. 827-834
- Mo Sun, Jian Li, Hui Feng, Wei Gou, Haifeng Shen, Jian Tang, Yi Yang, Jieping Ye:
Multi-modal Fusion Using Spatio-temporal and Static Features for Group Emotion Recognition. 835-840
- Bin Zhu, Xinjie Lan, Xin Guo, Kenneth E. Barner, Charles Boncelet:
Multi-rate Attention Based GRU Model for Engagement Prediction. 841-848
- Shivam Srivastava, Saandeep Aathreya Sidhapur Lakshminarayan, Saurabh Hinduja, Sk Rahatul Jannat, Hamza Elhamdadi, Shaun J. Canavan:
Recognizing Emotion in the Wild using Multimodal Data. 849-857
- Lukas Stappen, Georgios Rizos, Björn W. Schuller:
X-AWARE: ConteXt-AWARE Human-Environment Attention Fusion for Driver Gaze Prediction in the Wild. 858-867
Workshops Summaries
- Heysem Kaya, Roy S. Hessels, Maryam Najafian, Sandra Hanekamp, Saeid Safavi:
Bridging Social Sciences and AI for Understanding Child Behaviour. 868-870
- Keith Curtis, George Awad, Shahzad Rajput, Ian Soboroff:
International Workshop on Deep Video Understanding. 871-873
- Zakia Hammal, Di Huang, Kévin Bailly, Liming Chen, Mohamed Daoudi:
Face and Gesture Analysis for Health Informatics. 874-875
- Hayley Hung, Gabriel Murray, Giovanna Varni, Nale Lehmann-Willenbrock, Fabiola H. Gerpott, Catharine Oertel:
Workshop on Interdisciplinary Insights into Group and Team Dynamics. 876-877
- Carlos Velasco, Anton Nijholt, Charles Spence, Takuji Narumi, Kosuke Motoki, Gijs Huisman, Marianna Obrist:
Multisensory Approaches to Human-Food Interaction. 878-880
- Itir Önal Ertugrul, Jeffrey F. Cohn, Hamdi Dibeklioglu:
Multimodal Interaction in Psychopathology. 881-882
- Dennis Küster, Felix Putze, Patrícia Alves-Oliveira, Maike Paetzel, Tanja Schultz:
Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild. 883-885
- Arjan van Hessen, Silvia Calamai, Henk van den Heuvel, Stefania Scagliola, Norah Karrouche, Jeannine Beeken, Louise Corti, Christoph Draxler:
Speech, Voice, Text, and Meaning: A Multidisciplinary Approach to Interview Data through the use of digital tools. 886-887
- Theodoros Kostoulas, Michal Muszynski, Theodora Chaspari, Panos Amelidis:
Multimodal Affect and Aesthetic Experience. 888-889
- Leonardo Angelini, Mira El Kamali, Elena Mugellini, Omar Abou Khaled, Yordan Dimitrov, Vera Veleva, Zlatka Gospodinova, Nadejda Miteva, Richard Wheeler, Zoraida Callejas, David Griol, Kawtar Benghazi Akhlaki, Manuel Noguera, Panagiotis D. Bamidis, Evdokimos I. Konstantinidis, Despoina Petsani, Andoni Beristain Iraola, Dimitrios I. Fotiadis, Gérard Chollet, Inés Torres, Anna Esposito, Hannes Schlieter:
First Workshop on Multimodal e-Coaches. 890-892
- Hiroki Tanaka, Satoshi Nakamura, Jean-Claude Martin, Catherine Pelachaud:
Social Affective Multimodal Interaction for Health. 893-894
- Eleonora Ceccaldi, Benoît G. Bardy, Nadia Bianchi-Berthouze, Luciano Fadiga, Gualtiero Volpe, Antonio Camurri:
The First International Workshop on Multi-Scale Movement Technologies. 895-896