8th ACII 2019: Cambridge, UK
- 8th International Conference on Affective Computing and Intelligent Interaction, ACII 2019, Cambridge, United Kingdom, September 3-6, 2019. IEEE 2019, ISBN 978-1-7281-3888-6
- Judith Ley-Flores, Frédéric Bevilacqua, Nadia Bianchi-Berthouze, Ana Tajadura-Jiménez: Altering body perception and emotion in physically inactive people through movement sonification. 1-7
- Johnathan Mell, Jonathan Gratch, Reyhan Aydogan, Tim Baarslag, Catholijn M. Jonker: The Likeability-Success Tradeoff: Results of the 2nd Annual Human-Agent Automated Negotiating Agents Competition. 1-7
- Léo Hemamou, Ghazi Felhi, Jean-Claude Martin, Chloé Clavel: Slices of Attention in Asynchronous Video Job Interviews. 1-7
- Naoki Tateyama, Kazutaka Ueda, Masayuki Nakao: Development of an active sensing system for distress detection using skin conductance response. 1-6
- Nusrah Hussain, Engin Erzin, T. Metin Sezgin, Yücel Yemez: Batch Recurrent Q-Learning for Backchannel Generation Towards Engaging Agents. 1-7
- Alireza Sepas-Moghaddam, S. Ali Etemad, Paulo Lobato Correia, Fernando Pereira: A Deep Framework for Facial Emotion Recognition using Light Field Images. 1-7
- Asma Ghandeharioun, Daniel McDuff, Mary Czerwinski, Kael Rowan: EMMA: An Emotion-Aware Wellbeing Chatbot. 1-7
- Shashank Jaiswal, Siyang Song, Michel F. Valstar: Automatic prediction of Depression and Anxiety from behaviour and personality attributes. 1-7
- Eleuda Nuñez, Masakazu Hirokawa, Monica Perusquía-Hernández, Kenji Suzuki: Effect on Social Connectedness and Stress Levels by Using a Huggable Interface in Remote Communication. 1-7
- Héctor López-Carral, Diogo Santos Pata, Riccardo Zucca, Paul F. M. J. Verschure: How you type is what you type: Keystroke dynamics correlate with affective content. 1-5
- Patrick O'Toole, Donald Glowinski, Maurizio Mancini: Understanding Chromaesthesia by Strengthening Auditory-Visual-Emotional Associations. 1-7
- Md. Kamrul Hasan, Taylan K. Sen, Yiming Yang, Raiyan Abdul Baten, Kurtis Glenn Haut, Mohammed Ehsan Hoque: LIWC into the Eyes: Using Facial Features to Contextualize Linguistic Analysis in Multimodal Communication. 1-7
- Angela Chan, Niloofar Zarei, Takashi Yamauchi, Jinsil Hwaryoung Seo, Francis K. H. Quek: Touch Media: Investigating the Effects of Remote Touch on Music-based Emotion Elicitation. 1-7
- Mohammad Rafayet Ali, Taylan K. Sen, Viet-Duy Nguyen, Mohammed Ehsan Hoque, Ronald M. Epstein, Reza Rawassizadeh, Paul R. Duberstein: What Computers Can Teach Us About Doctor-Patient Communication: Leveraging Gender Differences in Cancer Care. 1-7
- Charlie Hewitt, Marwa Mahmoud: Pose-Informed Face Alignment for Extreme Head Pose Variations in Animals. 1-6
- Jimin Rhim, Anthony Cheung, David Pham, Subin Bae, Zhitian Zhang, Trista Townsend, Angelica Lim: Investigating Positive Psychology Principles in Affective Robotics. 1-7
- Rens Hoegen, Jonathan Gratch, Brian Parkinson, Danielle Shore: Signals of Emotion Regulation in a Social Dilemma: Detection from Face and Context. 1-7
- Joy Egede, Michel F. Valstar, Mercedes Torres Torres, Don Sharkey: Automatic Neonatal Pain Estimation: An Acute Pain in Neonates Database. 1-7
- Katie Seaborn, Nina Lee, Marla Narazani, Atsushi Hiyama: Intergenerational shared action games for promoting empathy between Japanese youth and elders. 1-7
- Florian Grond, M. Ariel Cascio, Rossio Motta-Ochoa, Tamar Tembeck, Dan Ten Veen, Stefanie Blain-Moraes: Participatory design of biomusic with users on the autism spectrum. 1-7
- Christine Spencer, Daniel Moore, Gary McKeown, Lucy Rutherford, Gawain Morrison: Context matters: protocol ordering effects on physiological arousal and experienced stress during a simulated driving task. 1-7
- Hannes Ritschel, Ilhan Aslan, Silvan Mertes, Andreas Seiderer, Elisabeth André: Personalized Synthesis of Intentional and Emotional Non-Verbal Sounds for Social Robots. 1-7
- Everlyne Kimani, Kael Rowan, Daniel McDuff, Mary Czerwinski, Gloria Mark: A Conversational Agent in Support of Productivity and Wellbeing at Work. 1-7
- Konstantinos Makantasis, Antonios Liapis, Georgios N. Yannakakis: From Pixels to Affect: A Study on Games and Player Experience. 1-7
- Béatrice Biancardi, Chen Wang, Maurizio Mancini, Angelo Cafaro, Guillaume Chanel, Catherine Pelachaud: A Computational Model for Managing Impressions of an Embodied Conversational Agent in Real-Time. 1-7
- Özge Nilay Yalçin: Evaluating Empathy in Artificial Agents. 1-7
- Victor R. Martinez, Anil Ramakrishna, Ming-Chang Chiu, Karan Singla, Shrikanth S. Narayanan: A system for the 2019 Sentiment, Emotion and Cognitive State Task of DARPA's LORELEI project. 1-6
- Rifca Peters, Joost Broekens, Kangqi Li, Mark A. Neerincx: Robots Expressing Dominance: Effects of Behaviours and Modulation. 1-7
- Tanja Schneeberger, Sofie Ehrhardt, Manuel S. Anglet, Patrick Gebhard: Would you Follow my Instructions if I was not Human? Examining Obedience towards Virtual Agents. 1-7
- Nicolas Beaudoin-Gagnon, Alexis Fortin-Côté, Cindy Chamberland, Ludovic Lefebvre, Jérémy Bergeron-Boucher, Alexandre Campeau-Lecours, Sébastien Tremblay, Philip L. Jackson: The FUNii Database: A Physiological, Behavioral, Demographic and Subjective Video Game Database for Affective Gaming and Player Experience Research. 1-7
- Mojgan Hashemian, Rui Prada, Pedro Alexandre Santos, João Dias, Samuel Mascarenhas: Inferring Emotions from Touching Patterns. 1-7
- Eddie Huang, Hannah Valdiviejas, Nigel Bosch: I'm Sure! Automatic Detection of Metacognition in Online Course Discussion Forums. 1-7
- Pritam Sarkar, Kyle Ross, Aaron J. Ruberto, Dirk Rodenburg, Paul Hungler, Ali Etemad: Classification of Cognitive Load and Expertise for Adaptive Simulation using Deep Multitask Learning. 1-7
- Jesús Joel Rivas, Felipe Orihuela-Espina, Luis Enrique Sucar, Amanda C. de C. Williams, Nadia Bianchi-Berthouze: Automatic Recognition of Multiple Affective States in Virtual Rehabilitation by Exploiting the Dependency Relationships. 1-7
- Megha Yadav, Md. Nazmus Sakib, Kexin Feng, Theodora Chaspari, Amir H. Behzadan: Virtual reality interfaces and population-specific models to mitigate public speaking anxiety. 1-7
- Nattapong Thammasan, Ivo V. Stuldreher, Dagmar Wismeijer, Mannes Poel, Jan B. F. van Erp, Anne-Marie Brouwer: A novel, simple and objective method to detect movement artefacts in electrodermal activity. 1-7
- Hao-Chun Yang, Chi-Chun Lee: Annotation Matters: A Comprehensive Study on Recognizing Intended, Self-reported, and Observed Emotion Labels using Physiology. 1-7
- Bernhard Anzengruber-Tanase, Alois Ferscha, Martin Schobesberger: Attention/Distraction Estimation for Surgeons during Laparoscopic Cholecystectomies. 1-7
- Ross Harper, Joshua Southern: End-To-End Prediction of Emotion from Heartbeat Data Collected by a Consumer Fitness Tracker. 1-7
- Erica Volta, Radoslaw Niewiadomski, Temitayo A. Olugbade, Carla Gilio, Elena Cocchi, Nadia Berthouze, Monica Gori, Gualtiero Volpe: Analysis of cognitive states during bodily exploration of mathematical concepts in visually impaired children. 1-7
- Kushal Chawla, Sopan Khosla, Niyati Chhaya, Kokil Jaidka: Pre-trained Affective Word Representations. 1-7
- Mohammed Abdel-Wahab, Carlos Busso: Active Learning for Speech Emotion Recognition Using Deep Neural Network. 1-7
- Min Peng, Chongyang Wang, Tao Bi, Yu Shi, Xiangdong Zhou, Tong Chen: A Novel Apex-Time Network for Cross-Dataset Micro-Expression Recognition. 1-6
- Frank Kaptein, Joost Broekens, Koen V. Hindriks, Mark A. Neerincx: Evaluating Cognitive and Affective Intelligent Agent Explanations in a Long-Term Health-Support Application for Children with Type 1 Diabetes. 1-7
- Eleonora Ceccaldi, Nale Lehmann-Willenbrock, Erica Volta, Mohamed Chetouani, Gualtiero Volpe, Giovanna Varni: How unitizing affects annotation of cohesion. 1-7
- Mani Kumar Tellamekala, Michel F. Valstar: Temporally Coherent Visual Representations for Dimensional Affect Recognition. 1-7
- Pablo V. A. Barros, Nikhil Churamani, Angelica Lim, Stefan Wermter: The OMG-Empathy Dataset: Evaluating the Impact of Affective Behavior in Storytelling. 1-7
- Grace Leslie, Asma Ghandeharioun, Diane Y. Zhou, Rosalind W. Picard: Engineering Music to Slow Breathing and Invite Relaxed Physiology. 1-7
- Belén Saldías F., Rosalind W. Picard: Tweet Moodifier: Towards giving emotional awareness to Twitter users. 1-7
- Paul H. Bucci, Xi Laura Cang, Hailey Mah, Laura Rodgers, Karon E. MacLean: Real Emotions Don't Stand Still: Toward Ecologically Viable Representation of Affective Interaction. 1-7
- Shiro Kumano, Keishi Nomura: Multitask Item Response Models for Response Bias Removal from Affective Ratings. 1-7
- Asma Ghandeharioun, Daniel McDuff, Mary Czerwinski, Kael Rowan: Towards Understanding Emotional Intelligence for Behavior Change Chatbots. 8-14
- Tilman Krokotsch, Ronald Böck: Generative Adversarial Networks and Simulated+Unsupervised Learning in Affect Recognition from Speech. 28-34
- Dominik Seuss, Anja Dieckmann, Teena Hassan, Jens-Uwe Garbas, Johann Heinrich Ellgring, Marcello Mortillaro, Klaus R. Scherer: Emotion Expression from Different Angles: A Video Database for Facial Expressions of Actors Shot by a Camera Array. 35-41
- Diego Fabiano, Shaun J. Canavan: Emotion Recognition Using Fused Physiological Signals. 42-48
- Brennon Bortz, Javier Jaimovich, R. Benjamin Knapp: Cross-Cultural Comparisons of Affect and Electrodermal Measures While Listening to Music. 55-61
- Sara Evensen, Yoshihiko Suhara, Alon Y. Halevy, Vivian Li, Wang-Chiew Tan, Saran Mumick: Happiness Entailment: Automating Suggestions for Well-Being. 62-68
- Ricardo Rodrigues, Ricardo Silva, Ricardo Pereira, Carlos Martinho: Interactive Empathic Virtual Coaches Based on the Social Regulatory Cycle. 69-75
- Jian Shen, Xiaowei Zhang, Junlei Li, Yuanxi Li, Lei Feng, Changqing Hu, Zhijie Ding, Gang Wang, Bin Hu: Depression Detection from Electroencephalogram Signals Induced by Affective Auditory Stimuli. 76-82
- Tzu-Yun Huang, Jeng-Lin Li, Chun-Min Chang, Chi-Chun Lee: A Dual-Complementary Acoustic Embedding Network Learned from Raw Waveform for Speech Emotion Recognition. 83-88
- Takashi Yamauchi, Anton Leontyev, Moein Razavi: Assessing Emotion by Mouse-cursor Tracking: Theoretical and Empirical Rationales. 89-95
- Muhammad Johan Alibasa, Rafael A. Calvo: Supporting Mood Introspection from Digital Footprints. 96-101
- Elizabeth Camilleri, Georgios N. Yannakakis, David Melhart, Antonios Liapis: PyPLT: Python Preference Learning Toolbox. 102-108
- Alexander Heimerl, Tobias Baur, Florian Lingenfelser, Johannes Wagner, Elisabeth André: NOVA - A tool for eXplainable Cooperative Machine Learning. 109-115
- Nesrine Fourati, Catherine Pelachaud, Patrice Darmon: Contribution of temporal and multi-level body cues to emotion classification. 116-122
- Gelareh Mohammadi, Kangying Lin, Patrik Vuilleumier: Towards Understanding Emotional Experience in a Componential Framework. 123-129
- David Melhart, Antonios Liapis, Georgios N. Yannakakis: PAGAN: Video Affect Annotation Made Easy. 130-136
- Jonny O'Dwyer, Niall Murray, Ronan Flynn: Eye-based Continuous Affect Prediction. 137-143
- Morten Roed Frederiksen, Kasper Støy: Augmenting the audio-based expression modality of a non-affective robot. 144-149
- Joost Broekens, Laduona Dai: A TDRL Model for the Emotion of Regret. 150-156
- Jiahui Hu, Bing Yu, Yun Yang, Bailan Feng: Towards Facial De-Expression and Expression Recognition in the Wild. 157-163
- Tanja Schneeberger, Mirella Scholtes, Bernhard Hilpert, Markus Langer, Patrick Gebhard: Can Social Agents elicit Shame as Humans do? 164-170
- Jeng-Lin Li, Chi-Chun Lee: Attention Learning with Retrievable Acoustic Embedding of Personality for Emotion Recognition. 171-177
- Koustuv Saha, Raghu Mulukutla, Kari Nies, Pablo Robles-Granda, Anusha Sirigiri, Dong Whi Yoo, Pino G. Audia, Andrew T. Campbell, Nitesh V. Chawla, Sidney K. D'Mello, Anind K. Dey, Manikanta D. Reddy, Kaifeng Jiang, Qiang Liu, Gloria Mark, Edward Moskal, Aaron Striegel, Munmun De Choudhury, Vedant Das Swain, Julie M. Gregg, Ted Grover, Suwen Lin, Gonzalo J. Martínez, Stephen M. Mattingly, Shayan Mirjafari: Imputing Missing Social Media Data Stream in Multisensor Studies of Human Behavior. 178-184
- Jian Huang, Jianhua Tao, Bin Liu, Zhen Lian, Mingyue Niu: Efficient Modeling of Long Temporal Contexts for Continuous Emotion Recognition. 185-191
- Shogo Okada, Ken Inoue, Toru Imai, Mami Noguchi, Kaiko Kuwamura: Dementia Scale Classification Based on Ubiquitous Daily Activity and Interaction Sensing. 192-198
- Monica Perusquía-Hernández, Saho Ayabe-Kanamura, Kenji Suzuki: Posed and spontaneous smile assessment with wearable skin conductance measured from the neck and head movement. 199-205
- Bernd Dudzik, Michel-Pierre Jansen, Franziska Burger, Frank Kaptein, Joost Broekens, Dirk K. J. Heylen, Hayley Hung, Mark A. Neerincx, Khiet P. Truong: Context in Human Emotion Perception for Automatic Affect Detection: A Survey of Audiovisual Databases. 206-212
- Daniel McDuff, Jeffrey M. Girard: Democratizing Psychological Insights from Analysis of Nonverbal Behavior. 220-226
- Meishu Song, Zijiang Yang, Alice Baird, Emilia Parada-Cabaleiro, Zixing Zhang, Ziping Zhao, Björn W. Schuller: Audiovisual Analysis for Recognising Frustration during Game-Play: Introducing the Multimodal Game Frustration Database. 517-523
- Carolyn Saund, Marion Roth, Mathieu Chollet, Stacy Marsella: Multiple metaphors in metaphoric gesturing. 524-530
- Samuel Spaulding, Cynthia Breazeal: Frustratingly Easy Personalization for Real-time Affect Interpretation of Facial Expression. 531-537
- Le Yang, Itir Önal Ertugrul, Jeffrey F. Cohn, Zakia Hammal, Dongmei Jiang, Hichem Sahli: FACS3D-Net: 3D Convolution based Spatiotemporal Representation for Action Unit Detection. 538-544
- Mina Marmpena, Angelica Lim, Torbjørn S. Dahl, Nikolas Hemion: Generating robotic emotional body language with variational autoencoders. 545-551
- Esam Ghaleb, Mirela Popa, Stylianos Asteriadis: Multimodal and Temporal Perception of Audio-visual Cues for Emotion Recognition. 552-558
- Matthew Lewis, Lola Cañamero: A Robot Model of Stress-Induced Compulsive Behavior. 559-565
- Youngjun Cho, Nadia Bianchi-Berthouze, Manuel Fradinho Oliveira, Catherine Holloway, Simon Julier: Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging. 566-572
- Batuhan Sayis, Ciera Crowell, Juan Pedro Benitez, Rafael Ramirez, Narcís Parés: Computational Modeling of Psycho-physiological Arousal and Social Initiation of children with Autism in Interventions through Full-Body Interaction. 573-579
- Samiha Samrose, Wenyi Chu, Carolina He, Yuebai Gao, Syeda Sarah Shahrin, Zhen Bai, Mohammed Ehsan Hoque: Visual Cues for Disrespectful Conversation Analysis. 580-586
- Judy Hanwen Shen, Àgata Lapedriza, Rosalind W. Picard: Unintentional affective priming during labeling may bias labels. 587-593
- Jeffrey M. Girard, Gayatri Shandar, Zhun Liu, Jeffrey F. Cohn, Lijun Yin, Louis-Philippe Morency: Reconsidering the Duchenne Smile: Indicator of Positive Emotion or Artifact of Smile Intensity? 594-599
- Brandon M. Booth, Shrikanth S. Narayanan: Trapezoidal Segmented Regression: A Novel Continuous-scale Real-time Annotation Approximation Algorithm. 600-606
- Benjamin Ma, Timothy Greer, Matthew E. Sachs, Assal Habibi, Jonas T. Kaplan, Shrikanth Narayanan: Predicting Human-Reported Enjoyment Responses in Happy and Sad Music. 607-613
- Tipporn Laohakangvalvit, Tiranee Achalakul, Michiko Ohkura: Comparison on Evaluation of Kawaiiness of Cosmetic Bottles between Japanese and Thai People. 614-619
- Ajjen Joshi, Youssef Attia, Taniya Mishra: Protocol for Eliciting Driver Frustration in an In-vehicle Environment. 620-626
- Su Lei, Jonathan Gratch: Smiles Signal Surprise in a Social Dilemma. 627-633
- Md. Kamrul Hasan, Wasifur Rahman, Luke Gerstner, Taylan K. Sen, Sangwu Lee, Kurtis Glenn Haut, Mohammed E. Hoque: Facial Expression Based Imagination Index and a Transfer Learning Approach to Detect Deception. 634-640
- Everlyne Kimani, Timothy W. Bickmore, Ha Trinh, Paola Pedrelli: You'll be Great: Virtual Agent-based Cognitive Restructuring to Reduce Public Speaking Anxiety. 641-647
- Zhengxuan Wu, Xiyu Zhang, Zhi-Xuan Tan, Jamil Zaki, Desmond C. Ong: Attending to Emotional Narratives. 648-654
- Raiyan Abdul Baten, Famous Clark, Mohammed E. Hoque: Upskilling Together: How Peer-interaction Influences Speaking-skills Development Online. 662-668
- Nathan L. Henderson, Andrew Emerson, Jonathan P. Rowe, James C. Lester: Improving Sensor-Based Affect Detection with Multimodal Data Imputation. 669-675
- Deniece S. Nazareth, Michel-Pierre Jansen, Khiet P. Truong, Gerben J. Westerhof, Dirk Heylen: MEMOA: Introducing the Multi-Modal Emotional Memories of Older Adults Database. 697-703
- Surjya Ghosh, Shivam Goenka, Niloy Ganguly, Bivas Mitra, Pradipta De: Representation Learning for Emotion Recognition from Smartphone Keyboard Interactions. 704-710
- Lisa E. Rombout, Marie Postma-Nilsenová: Exploring a Voice Illusion. 711-717
- Mia Atcheson, Vidhyasaharan Sethu, Julien Epps: Using Gaussian Processes with LSTM Neural Networks to Predict Continuous-Time, Dimensional Emotion in Ambiguous Speech. 718-724
- Shin Katayama, Akhil Mathur, Marc Van den Broeck, Tadashi Okoshi, Jin Nakazawa, Fahim Kawsar: Situation-Aware Emotion Regulation of Conversational Agents with Kinetic Earables. 725-731
- Siddique Latif, Junaid Qadir, Muhammad Bilal: Unsupervised Adversarial Domain Adaptation for Cross-Lingual Speech Emotion Recognition. 732-737
- Deboshree Bose, Ting Dang, Vidhyasaharan Sethu, Eliathamby Ambikairajah, Sarith Fernando: A Novel Bag-of-Optimised-Clusters Front-End for Speech based Continuous Emotion Prediction. 738-744