Search dblp
Full-text search
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append a dollar sign ($) to the word; e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by a space; e.g., codd model
- boolean or: connect words by the pipe symbol (|); e.g., graph|network

Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
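The operators above compose into a single query string. As an illustration only (these helper functions are hypothetical, not part of dblp; dblp simply parses the raw string), a query following these rules might be assembled like this:

```python
# Illustrative helpers for composing dblp full-text query strings
# according to the documented operators. The function names are our
# own invention; only the resulting strings matter to dblp.

def exact(word: str) -> str:
    """Exact word match: append '$' so 'graph$' matches 'graph' but not 'graphics'."""
    return word + "$"

def any_of(*words: str) -> str:
    """Boolean or: connect alternatives with the pipe symbol, e.g. 'graph|network'."""
    return "|".join(words)

def all_of(*terms: str) -> str:
    """Boolean and: separate terms with a space, e.g. 'codd model'."""
    return " ".join(terms)

# Publications mentioning 'codd' together with either the exact word
# 'graph' or any word starting with 'network':
query = all_of("codd", any_of(exact("graph"), "network"))
print(query)  # codd graph$|network
```

The resulting string can be pasted into the search box as-is.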
Author search results
no matches
Venue search results
no matches
Publication search results
found 61 matches
2019

- Véronique Aubergé: The Socio-Affective Robot: Aimed to Understand Human Links? AVEC@MM 2019: 1
- Haifeng Chen, Yifan Deng, Shiwen Cheng, Yixuan Wang, Dongmei Jiang, Hichem Sahli: Efficient Spatial Temporal Convolutional Features for Audiovisual Continuous Affect Recognition. AVEC@MM 2019: 19-26
- Weiquan Fan, Zhiwei He, Xiaofen Xing, Bolun Cai, Weirui Lu: Multi-modality Depression Detection via Multi-scale Temporal Dilated CNNs. AVEC@MM 2019: 73-80
- Heysem Kaya, Dmitrii Fedotov, Denis Dresvyanskiy, Metehan Doyran, Danila Mamontov, Maxim Markitantov, Alkim Almila Akdag Salah, Evrim Kavcar, Alexey Karpov, Albert Ali Salah: Predicting Depression and Emotions in the Cross-roads of Cultures, Para-linguistics, and Non-linguistics. AVEC@MM 2019: 27-35
- Yan Li, Tao Yang, Le Yang, Xiaohan Xia, Dongmei Jiang, Hichem Sahli: A Multimodal Framework for State of Mind Assessment with Sentiment Pre-classification. AVEC@MM 2019: 13-18
- Mariana Rodrigues Makiuchi, Tifani Warnita, Kuniaki Uto, Koichi Shinoda: Multimodal Fusion of BERT-CNN and Gated CNN Representations for Depression Detection. AVEC@MM 2019: 55-63
- Anupama Ray, Siddharth Kumar, Rutvik Reddy, Prerana Mukherjee, Ritu Garg: Multi-level Attention Network using Text, Audio and Video for Depression Prediction. AVEC@MM 2019: 81-88
- Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Nicholas Cummins, Roddy Cowie, Leili Tavabi, Maximilian Schmitt, Sina Alisamir, Shahin Amiriparian, Eva-Maria Meßner, Siyang Song, Shuo Liu, Ziping Zhao, Adria Mallol-Ragolta, Zhao Ren, Mohammad Soleymani, Maja Pantic: AVEC 2019 Workshop and Challenge: State-of-Mind, Detecting Depression with AI, and Cross-Cultural Affect Recognition. AVEC@MM 2019: 3-12
- Shi Yin, Cong Liang, Heyan Ding, Shangfei Wang: A Multi-Modal Hierarchical Recurrent Neural Network for Depression Detection. AVEC@MM 2019: 65-71
- Larry Zhang, Joshua Driscol, Xiaotong Chen, Reza Hosseini Ghomi: Evaluating Acoustic and Linguistic Features of Detecting Depression Sub-Challenge Dataset. AVEC@MM 2019: 47-53
- Jinming Zhao, Ruichen Li, Jingjun Liang, Shizhe Chen, Qin Jin: Adversarial Domain Adaption for Multi-Cultural Dimensional Emotion Recognition in Dyadic Interactions. AVEC@MM 2019: 37-45
- Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Nicholas Cummins, Roddy Cowie, Maja Pantic: Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop, AVEC@MM 2019, Nice, France, October 21-25, 2019. ACM 2019, ISBN 978-1-4503-6913-8 [contents]

2016

- Mohammadreza Amirian, Markus Kächele, Patrick Thiam, Viktor Kessler, Friedhelm Schwenker: Continuous Multimodal Human Affect Estimation using Echo State Networks. AVEC@ACM Multimedia 2016: 67-74
- Kevin Brady, Youngjune Gwon, Pooya Khorrami, Elizabeth Godoy, William M. Campbell, Charlie K. Dagli, Thomas S. Huang: Multi-Modal Audio, Video and Physiological Sensor Learning for Continuous Emotion Prediction. AVEC@ACM Multimedia 2016: 97-104
- Hatice Gunes: Multimodal Analysis of Impressions and Personality in Human-Computer and Human-Robot Interactions. AVEC@ACM Multimedia 2016: 1-2
- Zhaocheng Huang, Brian Stasak, Ting Dang, Kalani Wataraka Gamage, Phu Ngoc Le, Vidhyasaharan Sethu, Julien Epps: Staircase Regression in OA RVM, Data Selection and Gender Dependency in AVEC 2016. AVEC@ACM Multimedia 2016: 19-26
- Xingchen Ma, Hongyu Yang, Qiang Chen, Di Huang, Yunhong Wang: DepAudioNet: An Efficient Deep Model for Audio based Depression Classification. AVEC@ACM Multimedia 2016: 35-42
- Md. Nasir, Arindam Jati, Prashanth Gurunath Shivakumar, Sandeep Nallan Chakravarthula, Panayiotis G. Georgiou: Multimodal and Multiresolution Depression Detection from Speech and Facial Landmark Features. AVEC@ACM Multimedia 2016: 43-50
- Anastasia Pampouchidou, Olympia Simantiraki, Amir Fazlollahi, Matthew Pediaditis, Dimitris Manousos, Alexandros Roniotis, Giorgos A. Giannakakis, Fabrice Mériaudeau, Panagiotis G. Simos, Kostas Marias, Fan Yang, Manolis Tsiknakis: Depression Assessment by Fusing High and Low Level Features from Audio, Video, and Text. AVEC@ACM Multimedia 2016: 27-34
- Filip Povolný, Pavel Matejka, Michal Hradis, Anna Popková, Lubomír Otrusina, Pavel Smrz, Ian D. Wood, Cécile Robin, Lori Lamel: Multimodal Emotion Recognition for AVEC 2016 Challenge. AVEC@ACM Multimedia 2016: 75-82
- Krishna Somandepalli, Rahul Gupta, Md. Nasir, Brandon M. Booth, Sungbok Lee, Shrikanth S. Narayanan: Online Affect Tracking with Multimodal Kalman Filters. AVEC@ACM Multimedia 2016: 59-66
- Bo Sun, Siming Cao, Liandong Li, Jun He, Lejun Yu: Exploring Multimodal Visual Features for Continuous Affect Recognition. AVEC@ACM Multimedia 2016: 83-88
- Michel F. Valstar, Jonathan Gratch, Björn W. Schuller, Fabien Ringeval, Denis Lalanne, Mercedes Torres, Stefan Scherer, Giota Stratou, Roddy Cowie, Maja Pantic: AVEC 2016: Depression, Mood, and Emotion Recognition Workshop and Challenge. AVEC@ACM Multimedia 2016: 3-10
- Raphaël Weber, Vincent Barrielle, Catherine Soladié, Renaud Séguier: High-Level Geometry-based Features of Video Modality for Emotion Prediction. AVEC@ACM Multimedia 2016: 51-58
- James R. Williamson, Elizabeth Godoy, Miriam Cha, Adrianne Schwarzentruber, Pooya Khorrami, Youngjune Gwon, H. T. Kung, Charlie K. Dagli, Thomas F. Quatieri: Detecting Depression using Vocal, Facial and Semantic Communication Cues. AVEC@ACM Multimedia 2016: 11-18
- Le Yang, Dongmei Jiang, Lang He, Ercheng Pei, Meshia Cédric Oveneke, Hichem Sahli: Decision Tree Based Depression Classification from Audio Video and Language Information. AVEC@ACM Multimedia 2016: 89-96
- Michel F. Valstar, Jonathan Gratch, Björn W. Schuller, Fabien Ringeval, Roddy Cowie, Maja Pantic: Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge, AVEC@MM 2016, Amsterdam, The Netherlands, October 16, 2016. ACM 2016, ISBN 978-1-4503-4516-3 [contents]

2015

- Patrick Cardinal, Najim Dehak, Alessandro Lameiras Koerich, Jahangir Alam, Patrice Boucher: ETS System for AV+EC 2015 Challenge. AVEC@ACM Multimedia 2015: 17-23
- Linlin Chao, Jianhua Tao, Minghao Yang, Ya Li, Zhengqi Wen: Long Short Term Memory Recurrent Neural Network based Multimodal Dimensional Emotion Recognition. AVEC@ACM Multimedia 2015: 65-72
- Shizhe Chen, Qin Jin: Multi-modal Dimensional Emotion Recognition using Recurrent Neural Networks. AVEC@ACM Multimedia 2015: 49-56
(31 more matches not shown)
retrieved on 2024-10-03 16:27 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license